Showing papers on "Simulated annealing published in 1991"


Journal ArticleDOI
TL;DR: This is the second in a series of three papers that empirically examine the competitiveness of simulated annealing in certain well-studied domains of combinatorial optimization.
Abstract: This is the second in a series of three papers that empirically examine the competitiveness of simulated annealing in certain well-studied domains of combinatorial optimization. Simulated annealing is a randomized technique proposed by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi for improving local optimization algorithms. Here we report on experiments at adapting simulated annealing to graph coloring and number partitioning, two problems for which local optimization had not previously been thought suitable. For graph coloring, we report on three simulated annealing schemes, all of which can dominate traditional techniques for certain types of graphs, at least when large amounts of computing time are available. For number partitioning, simulated annealing is not competitive with the differencing algorithm of N. Karmarkar and R. M. Karp, except on relatively small instances. Moreover, if running time is taken into account, natural annealing schemes cannot even outperform multiple random runs of the local optimization algorithms on which they are based, in sharp contrast to the observed performance of annealing on other problems.
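As a point of reference for the comparison above, the Karmarkar-Karp differencing heuristic mentioned in the abstract can be sketched in a few lines. This is a generic illustration of the differencing idea, not the authors' experimental code; the input numbers are arbitrary.

```python
# A sketch of the Karmarkar-Karp differencing heuristic for number
# partitioning (the algorithm the abstract compares annealing against).
# It repeatedly replaces the two largest numbers by their difference; the
# value left at the end is the partition difference it achieves.
import heapq

def karmarkar_karp(numbers):
    heap = [-x for x in numbers]          # max-heap via negated values
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)          # largest
        b = -heapq.heappop(heap)          # second largest
        heapq.heappush(heap, -(a - b))    # commit them to opposite subsets
    return -heap[0] if heap else 0

# Toy instance: the heuristic returns 2 here, although the optimal split
# {7, 8} vs {4, 5, 6} has difference 0.
print(karmarkar_karp([4, 5, 6, 7, 8]))
```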

904 citations


Journal ArticleDOI
TL;DR: In this paper, simulated annealing is applied to 1-D seismic waveform inversion, and the authors show that initiating the annealing just above the critical temperature, with very slow cooling, reaches a model close to the global minimum energy state very rapidly.
Abstract: The seismic inverse problem involves finding a model m that either minimizes the error energy between the data and theoretical seismograms or maximizes the cross-correlation between the synthetics and the observations. We are, however, faced with two problems: (1) the model space is very large, typically of the order of 50^50; and, (2) the error energy function is multimodal. Existing calculus-based methods are local in scope and easily get trapped in local minima of the energy function. Other methods such as 'simulated annealing' and 'genetic algorithms' can be applied to such global optimization problems and they do not depend on the starting model. Both of these methods bear analogy to natural systems and are robust in nature. For example, simulated annealing is the analog to a physical process in which a solid in a 'heat bath' is heated by increasing the temperature, followed by slow cooling until it reaches the global minimum energy state where it forms a crystal. To use simulated annealing efficiently for 1-D seismic waveform inversion, we require a modeling method that rapidly performs the forward modeling calculation and a cooling schedule that will enable us to find the global minimum of the energy function rapidly. With the advent of vector computers, the reflectivity method has proved successful and the time of the calculation can be reduced substantially if only plane-wave seismograms are required. Thus, the principal problem with simulated annealing is to find the critical temperature, i.e., the temperature at which crystallization occurs. By initiating the simulated annealing process with different starting temperatures for a fixed number of iterations with a very slow cooling, we noticed that by starting very near but just above the critical temperature, we reach very close to the global minimum energy state very rapidly. We have applied this technique successfully to band-limited synthetic data in the presence of random noise. In most cases we find that we are able to obtain very good solutions using only a few plane wave seismograms.
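The annealing loop the abstract builds on can be sketched generically as follows. The energy and perturbation functions below are toy stand-ins (not the reflectivity forward model or seismic misfit), and T0 plays the role of the starting temperature chosen near the critical temperature.

```python
# A generic Metropolis-style annealing loop of the kind the abstract relies
# on. The energy and perturbation below are toy stand-ins, not the
# reflectivity forward model; T0 is the starting temperature.
import math, random

def simulated_annealing(energy, perturb, m0, T0, cooling=0.999, n_iter=20000):
    m, e = m0, energy(m0)
    best_m, best_e = m, e
    T = T0
    for _ in range(n_iter):
        cand = perturb(m)
        e_cand = energy(cand)
        # Accept improvements always; accept uphill moves with prob. exp(-dE/T).
        if e_cand <= e or random.random() < math.exp(-(e_cand - e) / T):
            m, e = cand, e_cand
            if e < best_e:
                best_m, best_e = m, e
        T *= cooling                      # slow geometric cooling schedule
    return best_m, best_e

# Toy usage on a multimodal 1-D energy function.
energy = lambda x: (x**2 - 4)**2 + 0.5 * math.sin(10 * x)
perturb = lambda x: x + random.gauss(0, 0.1)
print(simulated_annealing(energy, perturb, m0=5.0, T0=1.0))
```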

458 citations


Journal ArticleDOI
TL;DR: The simulated annealing approach to optimization is described and proposed for solving the clustering problem; the parameters of the algorithm are discussed in detail, and it is shown that the algorithm converges to a global solution of the clustering problem.

435 citations


Journal ArticleDOI
TL;DR: In this article, a stochastic approach based on the simulated annealing algorithm is proposed for global optimization problems, which can be defined as the problem of finding points on a bounded subset of ℝ^n at which some real-valued function f assumes its optimal (maximal or minimal) value.
Abstract: In this paper we are concerned with global optimization, which can be defined as the problem of finding points on a bounded subset of ℝ^n in which some real valued function f assumes its optimal (maximal or minimal) value. We present a stochastic approach which is based on the simulated annealing algorithm. The approach closely follows the formulation of the simulated annealing algorithm as originally given for discrete optimization problems. The mathematical formulation is extended to continuous optimization problems, and we prove asymptotic convergence to the set of global optima. Furthermore, we discuss an implementation of the algorithm and compare its performance with other well-known algorithms. The performance evaluation is carried out for a standard set of test functions from the literature.
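For readers who want to experiment with the continuous formulation described here, a hedged illustration using SciPy's generalized simulated annealing routine on the Rosenbrock function, a standard test function from the literature, is shown below; this is not the authors' implementation.

```python
# Hedged illustration only: SciPy's dual_annealing applied to the Rosenbrock
# test function; not the algorithm or code of the paper.
from scipy.optimize import dual_annealing, rosen

bounds = [(-5.0, 5.0)] * 4                 # 4-dimensional search box
result = dual_annealing(rosen, bounds, seed=0)
print(result.x, result.fun)                # global minimum is at (1, 1, 1, 1)
```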

391 citations


Journal ArticleDOI
TL;DR: In this paper, the applicability of genetic algorithms to the inversion of plane-wave seismograms was investigated, where a random walk in model space and a transition probability rule were used to help guide their search.
Abstract: Seismic waveform inversion is one of many geophysical problems which can be identified as a nonlinear multiparameter optimization problem. Methods based on local linearization fail if the starting model is too far from the true model. We have investigated the applicability of “Genetic Algorithms” (GA) to the inversion of plane‐wave seismograms. Like simulated annealing, genetic algorithms use a random walk in model space and a transition probability rule to help guide their search. However, unlike a single simulated annealing run, the genetic algorithms search from a randomly chosen population of models (strings) and work with a binary coding of the model parameter set. Unlike a pure random search, such as in a “Monte Carlo” method, the search used in genetic algorithms is not directionless. Genetic algorithms essentially consist of three operations, selection, crossover, and mutation, which involve random number generation, string copies, and some partial string exchanges. The choice of the initial popul...
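The three operations named in the abstract can be sketched on a binary-string population as follows; the one-max fitness function is a toy stand-in for the seismic waveform misfit actually optimized in the paper.

```python
# A minimal GA with the three operations named in the abstract: selection,
# crossover, and mutation on binary strings. The one-max fitness is a toy
# stand-in for the seismic waveform misfit used in the paper.
import random

def fitness(bits):                         # toy fitness: number of 1-bits
    return sum(bits)

def tournament_select(pop, k=2):           # selection
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):                       # single-point partial string exchange
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def mutate(bits, rate=0.02):               # random bit flips
    return [1 - b if random.random() < rate else b for b in bits]

def genetic_algorithm(n_bits=32, pop_size=40, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament_select(pop), tournament_select(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = genetic_algorithm()
print(fitness(best))
```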

378 citations


Journal ArticleDOI
Fayez F. Boctor1
TL;DR: In this article, a linear zero-one formulation is proposed for the machine-part group formation problem in cellular manufacturing systems; most of the integrality conditions of the formulation can be relaxed, and a simulated annealing approach is presented for large-scale problems.
Abstract: The machine-part group formation is an important issue in the design of cellular manufacturing systems. The present paper first discusses some of the alternative formulations of this problem, their advantages and disadvantages, and then suggests a new linear zero-one formulation which seems to have removed most of the disadvantages observed in other models. It will be shown that most of the integrality conditions of the proposed formulation can be relaxed. This considerably improves its computational feasibility and efficiency. Finally, a simulated annealing approach to deal with large-scale problems is also presented.

332 citations


Journal ArticleDOI
David Abramson1
TL;DR: This paper considers a simulated annealing solution to the school timetabling problem and presents a parallel algorithm that can provide a faster solution than the equivalent sequential algorithm.
Abstract: This paper considers a solution to the school timetabling problem. The timetabling problem involves scheduling a number of tuples, each consisting of class of students, a teacher, a subject and a room, to a fixed number of time slots. A Monte Carlo scheme called simulated annealing is used as an optimisation technique. The paper introduces the timetabling problem, and then describes the simulated annealing method. Annealing is then applied to the timetabling problem. A prototype timetabling environment is described followed by some experimental results. A parallel algorithm which can be implemented on a multiprocessor is presented. This algorithm can provide a faster solution than the equivalent sequential algorithm. Some further experimental results are given.
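A plausible clash-counting cost function for such tuples might look like the sketch below; the resource names and data layout are illustrative assumptions, not the paper's implementation.

```python
# A plausible clash-counting cost for timetabling tuples; the data layout and
# resource names are illustrative assumptions, not the paper's implementation.
from collections import Counter

def timetable_cost(assignment):
    # assignment: list of ((class_, teacher, room), slot) pairs
    cost = 0
    for resource_index in (0, 1, 2):       # class, teacher, room
        usage = Counter((tup[resource_index], slot) for tup, slot in assignment)
        cost += sum(count - 1 for count in usage.values() if count > 1)
    return cost

demo = [(("5A", "Smith", "R1"), 0),        # Smith teaches two classes in slot 0
        (("5B", "Smith", "R2"), 0),
        (("5A", "Jones", "R1"), 1)]
print(timetable_cost(demo))                # -> 1 clash
```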

322 citations



Journal ArticleDOI
TL;DR: Simulated annealing, a heuristic, probabilistic optimization method that seeks minima in analogy with the annealing of solids, is shown to be effective on large-scale groundwater management problems, including optimization with multiple groundwater control technologies.
Abstract: Simulated annealing is introduced and applied to the optimization of groundwater management problems cast in combinatorial form. This heuristic, probabilistic optimization method seeks minima in analogy with the annealing of solids and is effective on large-scale problems. No continuity requirements are imposed on objective (cost) functions. Constraints may be added to the cost function via penalties, imposed by designation of the solution domain, or imbedded in submodels (e.g., mass balance in aquifer flow simulators) used to evaluate costs. The location of global optima may be theoretically guaranteed, but computational limitations lead to searches for nearly optimal solutions in practice. Like other optimization methods, most of the computational effort is expended in flow and transport simulators. Practical algorithmic guidance that leads to enormous computational savings and sometimes makes simulated annealing competitive with gradient-type optimization methods is provided. The method is illustrated by example applications to idealized problems of groundwater flow and selection of remediation strategy, including optimization with multiple groundwater control technologies. They demonstrate the flexibility of the method and indicate its potential for solving groundwater management problems. The application of simulated annealing to water resources problems is new and its development is immature, so further performance improvements can be expected.
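The penalty idea described above can be sketched as follows; the flow simulator, drawdown limit, and cost terms are hypothetical stand-ins for the submodels used in the paper.

```python
# Sketch of the penalty idea: constraint violations reported by a (stand-in)
# flow simulator are added to the operating cost, so annealing can treat the
# constrained problem as an unconstrained one. All names and numbers here are
# hypothetical.
def flow_simulator(pumping_rates):
    # Stand-in for the aquifer flow submodel; returns a fake drawdown per well.
    return [0.01 * q for q in pumping_rates]

def penalized_cost(pumping_rates, drawdown_limit=2.0, penalty_weight=1e4):
    operating_cost = sum(pumping_rates)                   # toy cost term
    drawdowns = flow_simulator(pumping_rates)
    violation = sum(max(0.0, d - drawdown_limit) for d in drawdowns)
    return operating_cost + penalty_weight * violation    # penalty term

print(penalized_cost([100.0, 150.0, 300.0]))
```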

234 citations


Journal ArticleDOI
TL;DR: Focalization, which simultaneously focuses and localizes, eliminates the stringent requirement of accurate environmental knowledge in matched-field processing by including the environment in the parameter search space, defining an appropriate high-resolution cost function, and utilizing a nonlinear optimization method to search the parameter landscape for the global minimum of the cost function.
Abstract: Conventional matched‐field processing (MFP) requires accurate knowledge of the ocean‐acoustic environment. Focalization, which simultaneously focuses and localizes, eliminates this stringent requirement by including the environment in the parameter search space. This generalization of MFP involves defining an appropriate high‐resolution cost function, parametrizing the search space of the environment and source, constructing solutions of the wave equation, and utilizing a nonlinear optimization method to search the parameter landscape for the global minimum of the cost function. Focalization is implemented using cost functions based on ray theory and wave theory, empirical orthogonal functions for the environmental description, and simulated annealing for optimization. Numerical simulations are presented to demonstrate the feasibility of focalization.

232 citations


Journal ArticleDOI
TL;DR: In this article, a single optimization problem is proposed to determine simultaneously the optimal utility consumption level, matches, and network configuration; the approach can be applied to both pseudo-pinch and strict pinch design problems.

Proceedings ArticleDOI
01 Jan 1991
TL;DR: A novel exact algorithm and gradual improvement methods for minimizing binary decision diagrams are presented; by using the BDD representation of intermediate functions and introducing pruning, the authors succeeded in minimizing a 17-variable function.
Abstract: The authors present a novel exact algorithm and gradual improvement methods for minimizing binary decision diagrams (BDDs). In the exact minimization algorithm, the optimum order is searched by the exchanges of variables of BDDs based on the framework of the algorithm of S.J. Friedman and K.J. Supowit (1990). The use of the BDD representation of a given function and intermediate functions makes it possible to introduce pruning into the method, which drastically reduces the computation cost. The authors succeeded in minimizing a 17-variable function by the use of the BDD representation of intermediate functions and the introduction of pruning. They also propose a greedy method and a simulated annealing method based on exchanges of two arbitrary variables, and a greedy method based on exchanges of adjacent m variables for m=3 and 4.

Journal ArticleDOI
TL;DR: A collection of heuristics for the single machine total (weighted) tardiness problem is presented, and it is indicated that straightforward interchange methods perform remarkably well.
Abstract: This paper presents a collection of heuristics for the single machine total (weighted) tardiness problem. The methods considered range from simple quick and dirty heuristics to more sophisticated algorithms exploiting problem structure. These heuristics are compared to interchange and simulated annealing methods on a large set of test problems. For the total tardiness problem a heuristic based on decomposition performs very well, whereas for the total weighted tardiness problem simulated annealing appears to be a viable approach. Our computational results also indicate that straightforward interchange methods perform remarkably well.
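A minimal version of the straightforward interchange idea referred to above might look like this; the job data are illustrative, and the paper's own heuristics are more elaborate.

```python
# A minimal adjacent-interchange heuristic for total weighted tardiness:
# start from an earliest-due-date sequence and keep swapping neighbouring
# jobs whenever the swap lowers the objective. Job data are illustrative.
def weighted_tardiness(seq, proc, due, weight):
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += weight[j] * max(0, t - due[j])
    return total

def adjacent_interchange(proc, due, weight):
    seq = sorted(range(len(proc)), key=lambda j: due[j])   # EDD starting sequence
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if weighted_tardiness(cand, proc, due, weight) < \
               weighted_tardiness(seq, proc, due, weight):
                seq, improved = cand, True
    return seq

proc, due, weight = [4, 2, 6, 3], [5, 3, 11, 7], [1, 2, 1, 3]
seq = adjacent_interchange(proc, due, weight)
print(seq, weighted_tardiness(seq, proc, due, weight))
```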

Journal ArticleDOI
TL;DR: In this paper, a new optimization protocol is proposed based on a combination of a mean field approximation and simulated annealing, in which, instead of optimizing the energy of the real system, the energy of a new mean field system is minimized.
Abstract: A new optimization protocol is proposed which is based on a combination of a mean field approximation and simulated annealing. Instead of optimizing the energy of the real system, the energy of a new mean field system is minimized. The global minimum of the new system and the original system is the same. The mean‐field optimization is advantageous compared to the optimization of the real system since (a) more statistics are obtained for alternative solutions and (b) the barrier heights separating the minima are reduced compared to the real system. Computational examples are provided for placement of side chains in tetrapeptides and in a small protein, BPTI.

Journal ArticleDOI
TL;DR: In this paper, the problem of thermal power plant generator maintenance scheduling is formulated as a mixed-integer programming problem and solved by using an optimization method known as simulated annealing, which assumes an analogy between a physical multiparticle system and a combinatorial optimization problem.
Abstract: The thermal power plant generator maintenance scheduling problem is addressed. The problem is formulated as a mixed-integer programming problem, and it is solved by using an optimization method known as simulated annealing. Since the simulated annealing method assumes an analogy between a physical multiparticle system and a combinatorial optimization problem, a global minimum can be found with high probability through a careful annealing process. Numerical results on a real-scale test system are given, and the effectiveness of the proposed method is demonstrated.

Journal ArticleDOI
TL;DR: In this article, a method of solving the floorplan design problem using distributed genetic algorithms is presented, based on the paleontological theory of punctuated equilibria, which offers a conceptual modification to the traditional genetic algorithms.
Abstract: Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to the traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate the advantages of this method over other methods, such as simulated annealing. The method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.

Journal ArticleDOI
TL;DR: Most of the important results on the theory of Simulated Annealing are reviewed, placing them in a unified framework and new results are reported as well.
Abstract: Simulated Annealing has been a very successful general algorithm for the solution of large, complex combinatorial optimization problems. Since its introduction, several applications in different fields of engineering, such as integrated circuit placement, optimal encoding, resource allocation, logic synthesis, have been developed. In parallel, theoretical studies have been focusing on the reasons for the excellent behavior of the algorithm. This paper reviews most of the important results on the theory of Simulated Annealing, placing them in a unified framework. New results are reported as well.

Journal ArticleDOI
TL;DR: In this paper, a comparison of two methods for surmounting the multiple-minima problem, Simulated Annealing (SA) and Monte Carlo with Minimization (MCM), is presented with applications to [Met]-enkephalin in the absence and in the presence of water.
Abstract: A comparison of two methods for surmounting the multiple-minima problem, Simulated Annealing (SA) and Monte Carlo with Minimization (MCM), is presented with applications to [Met]-enkephalin in the absence and in the presence of water. SA explores a continuous space of internal variables, while MCM explores a discrete space consisting of the local energy minima on that space. Starting from random conformations chosen from the whole conformational space in both cases, it is found that, while SA converges to low-energy structures significantly faster than MCM, the former does not converge to a unique minimum whereas the latter does. Furthermore, the behavior of the RMS deviations with respect to the apparent global minimum (for enkephalin in the absence of water) shows no correlation with the observed overall energy decrease in the case of SA, whereas such a correlation is quite evident with MCM; this implies that, even though the potential energy decreases in the annealing process, the Monte Carlo SA trajectory does not proceed towards the global minimum. Possible reasons for these differences between the two methods are discussed. It is concluded that, while SA presents attractive prospects for possibly improving or refining given structures, it must be considered inferior to MCM, at least in problems where little or no structural information is available for the molecule of interest.
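MCM's strategy of exploring the discrete space of local minima (perturb, minimize locally, then apply a Metropolis test on the minimized energies) closely resembles what is now commonly called basin hopping; a hedged sketch using SciPy's implementation on a toy double-well potential, rather than the [Met]-enkephalin force field, is shown below.

```python
# Perturb, minimize locally, then apply a Metropolis test on the minimized
# energies: the MCM strategy is essentially what SciPy calls basin hopping.
# The double-well potential is a toy stand-in for the peptide energy surface.
from scipy.optimize import basinhopping

def toy_energy(x):
    # 1-D double well, tilted so the global minimum is in the x < 0 well.
    return (x[0]**2 - 1.0)**2 + 0.3 * x[0]

result = basinhopping(toy_energy, x0=[2.0], niter=50, stepsize=0.5, seed=0)
print(result.x, result.fun)                # lands near the deeper well
```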


Journal ArticleDOI
TL;DR: In this article, the optimal placement of active and passive members in complex truss structures was studied using the simulated annealing technique, where the authors adopted the maximization of the cumulative energy dissipated over a finite time interval as the measure of optimality.
Abstract: Active structural members with built-in sensing, feedback control, and actuation functions are used herein, along with passively damped members, to augment the inherent damping in truss structures. The effective use of such members makes it desirable to distribute them optimally throughout the structure. For simple structural systems, it is possible to place these members with some degree of optimality on the basis of engineering judgment. However, for more complex systems, the number of possible choices is so large that one may have to rely on a more formal optimization technique. This paper deals with the optimal placement of active and passive members in complex truss structures. The problem falls in the class of combinatorial optimization, for which the solution becomes exceedingly intractable as the problem size increases. This difficulty is overcome herein by use of the simulated annealing technique. We adopt the maximization of the cumulative energy dissipated over a finite time interval as the measure of optimality. The selection of nearly optimal locations for both passive and active members is consistently treated through the use of the finite-time energy dissipation criterion within the framework of the simulated annealing algorithm. Numerical examples are used to illustrate the effectiveness of this methodology.

Journal ArticleDOI
Fabio Schoen1
TL;DR: Stochastic algorithms for global optimization are reviewed with the aim of presenting recent papers on the subject, which have received only scarce attention in the most recent published surveys.
Abstract: In this paper stochastic algorithms for global optimization are reviewed. After a brief introduction on random-search techniques, a more detailed analysis is carried out on the application of simulated annealing to continuous global optimization. The aim of such an analysis is mainly that of presenting recent papers on the subject, which have received only scarce attention in the most recent published surveys. Finally a very brief presentation of clustering techniques is given.

Journal ArticleDOI
TL;DR: In this article, a preliminary study of the application of simulated annealing (SA) to complex permittivity reconstruction in microwave tomography is presented, and the results show that SA can converge to an accurate solution in cases where the two deterministic methods fail.
Abstract: A preliminary study of the application of simulated annealing (SA) to complex permittivity reconstruction in microwave tomography is presented. Reconstructions of a simplified model of a human arm obtained with simulated noise-free data are presented for three different methods: SA, quenching, and a Newton-Kantorovich method. These results show that SA can converge to an accurate solution in cases where the two deterministic methods fail. For this reason SA can be used to get closer to the final solution before applying a faster deterministic method.

Journal ArticleDOI
TL;DR: In this article, a simulated annealing strategy is developed for use in the discrete optimization of three-dimensional steel frames, which randomly perturbs the current design to create a candidate design.
Abstract: A simulated annealing strategy is developed for use in the discrete optimization of three-dimensional steel frames. This strategy randomly perturbs the current design to create a candidate design. A probabilistic acceptance criterion is then employed to determine whether the candidate design should replace the current design or be rejected. This acceptance criterion allows worse designs to be accepted in the initial stages of the strategy. The likelihood of accepting worse designs is small in the final stages of the strategy. The strategy is presented and illustrated on a three-dimensional, six-story, unsymmetrical frame. The frame is realistically loaded with gravity and seismic loads. Members in the frame must be selected from among discrete standardized shapes. The strategy is able to treat multiple section properties per member without having to curve-fit dependent properties as functions of a single independent property. Performance of the strategy is compared to that of the branch-and-bound method. Approximation techniques aimed at reducing computation time are investigated.
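The random perturbation over discrete standardized shapes described above might be sketched as follows; the section catalogue entries are hypothetical placeholders, not the shapes used in the paper.

```python
# Sketch of a discrete design perturbation: move one randomly chosen member
# to a neighbouring entry of an ordered section catalogue. The catalogue
# entries are hypothetical placeholders, not the shapes used in the paper.
import random

CATALOG = ["S1", "S2", "S3", "S4", "S5"]    # sections ordered by weight (hypothetical)

def perturb_design(design):
    cand = design[:]
    member = random.randrange(len(cand))
    step = random.choice([-1, 1])           # step to an adjacent catalogue entry
    idx = CATALOG.index(cand[member]) + step
    cand[member] = CATALOG[max(0, min(len(CATALOG) - 1, idx))]
    return cand

current = ["S2", "S3", "S3"]
print(perturb_design(current))
```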

Proceedings ArticleDOI
01 Jun 1991
TL;DR: A branch-and-bound placement technique for building block layout that effectively searches for an optimal placement in the whole solution space and decomposes the problem hierarchically and applies the method to each subproblem.
Abstract: We present a branch-and-bound placement technique for building block layout that effectively searches for an optimal placement in the whole solution space. We first describe a block placement problem and its solution space. Then we explain branching and bounding operations designed for the placement problem. Constraints on critical nets and/or the shape of a resulting chip can be taken into account in the search process. Experiments reveal that the number of blocks the method can manage is around six if the whole solution space is explored. For a problem which contains more blocks than the limit, we decompose the problem hierarchically and apply the method to each subproblem. The results for standard benchmark examples and a comparison with those of other systems are given to demonstrate the performance of the method.

Journal ArticleDOI
01 Jun 1991
TL;DR: The planning problem for a mobile manipulator system that must perform a sequence of tasks defined by position, orientation, force, and moment vectors at the end-effector is considered and simulated annealing is proposed as a general solution method for obtaining near-optimal results.
Abstract: The planning problem for a mobile manipulator system that must perform a sequence of tasks defined by position, orientation, force, and moment vectors at the end-effector is considered. Each task can be performed in multiple configurations due to the redundancy introduced by mobility. The planning problem is formulated as an optimization problem in which the decision variables for mobility (base position) are separated from the manipulator joint angles in the cost function. The resulting numerical problem is nonlinear with nonconvex, unconnected feasible regions in the decision space. Simulated annealing is proposed as a general solution method for obtaining near-optimal results. The problem formulation and numerical solution by simulated annealing are illustrated for a manipulator system with three degrees of freedom mounted on a base with two degrees of freedom. The results are compared with results obtained by conventional nonlinear programming techniques customized for the particular example system.

Journal ArticleDOI
TL;DR: Stochastic evolution can be specifically tailored to solve the network bisection, traveling salesman, and standard cell placement problems and Experimental results show that SE can produce better quality solutions than sophisticated simulated annealing (SA)-based heuristics in a much shorter computation time.
Abstract: A novel technique is introduced, called stochastic evolution (SE), for solving a wide range of combinatorial optimization problems. It is shown that SE can be specifically tailored to solve the network bisection, traveling salesman, and standard cell placement problems. Experimental results for these problems show that SE can produce better quality solutions than sophisticated simulated annealing (SA)-based heuristics in a much shorter computation time.

Journal ArticleDOI
TL;DR: In this article, a parallel simulated annealing algorithm that is problem-independent, maintains the serial decision sequence, and obtains speedup which can exceed log₂P on P processors is discussed.
Abstract: A parallel simulated annealing algorithm that is problem-independent, maintains the serial decision sequence, and obtains speedup which can exceed log₂P on P processors is discussed. The algorithm achieves parallelism by using the concurrency technique of speculative computation. Implementation of the parallel algorithm on a hypercube multiprocessor and application to a task assignment problem are described. The simulated annealing solutions are shown to be, on average, 28% better than the solutions produced by a random task assignment algorithm and 2% better than the solutions produced by a heuristic.

Journal ArticleDOI
TL;DR: X-ray crystallography is an increasingly important tool for understanding structure, function, and control of biological macromolecules, and several computational procedures are required to solve and refine the structure.
Abstract: X-ray crystallography (see Refs. 1, 2 for reviews) is an increasingly important tool for understanding structure, function, and control of biological macromolecules. Developments in genetics, data collection, and computer hardware have produced an unprecedented growth of macromolecular crystallographic studies. X-ray crystallography produces large amounts of diffraction data, whose interpretation is entirely dependent upon the availability of powerful computers and sophisticated algorithms. After crystallization and data collection, several computational procedures are required to solve and refine the structure. These procedures include methods of phasing, density modification, chain tracing, refinement, and correction of errors. Many of these computational procedures can be formulated as nonlinear optimization problems: One tries to optimize a target function, usually the discrepancy between observed and computed diffraction data, as a function of certain parameters, such as phases, scale factors between structure factors, or parameters of an atomic model. Optimization problems in macromolecular crystallography suffer from the multiple minimum problem. A case in point is crystallographic refinement, in which one wants to improve the agreement of an atomic model with the diffraction data. The high-dimensionality of the parameter space of the atomic model (typically three times the number of atoms) introduces many local minima of the target function; thus, gradient descent methods, such as conjugate gradient minimization or least-squares methods (3),

Journal ArticleDOI
TL;DR: Although the PCFT objective function gives consistently lower estimates of normal tissue complication probabilities, the ability to specify individualized dose-volume limits, and therefore an individualized probability of complication, for each organ makes the MVDL objective function more useful for treatment planning.
Abstract: Constrained simulated annealing is used with a simple annealing schedule to optimize beam weights and angles in radiation therapy treatment planning. Constrained simulated annealing is demonstrated using two contrasting objective functions. The first objective function maximizes the probability of a complication-free treatment (PCFT) by minimizing the normal tissue complications subject to the constraint that the entire target volume receives a prescribed minimum tumourcidal dose with a specified dose homogeneity. The second objective function maximizes the isocentre dose subject to a set of customized normal tissue dose-volume and target volume dose homogeneity constraints (MVDL). Although the PCFT objective function gives consistently lower estimates of normal tissue complication probabilities, the ability to specify individualized dose-volume limits, and therefore the individualized probability of complication, for an individual organ makes the MVDL objective function more useful for treatment planning.

Journal ArticleDOI
01 Jul 1991
TL;DR: A general-purpose program, INTEROPT, is described, which finds the minimum of arbitrary functions, with user-friendly, quasi-natural-language input, and optimizes functions of up to 30 variables.
Abstract: A numerical method for finding the global minimum of nonconvex functions is presented. The method is based on the principles of simulated annealing, but handles continuously valued variables in a natural way. The method is completely general, and optimizes functions of up to 30 variables. Several examples are presented. A general-purpose program, INTEROPT, is described, which finds the minimum of arbitrary functions, with user-friendly, quasi-natural-language input.