
Showing papers on "Simulated annealing published in 2002"


Journal ArticleDOI
15 May 2002-Proteins
TL;DR: An all‐atom force field aimed at protein and nucleotide optimization in vacuo (NOVA), specifically designed so that refinement does not move models away from the experimentally determined coordinates; it can be applied to modeling applications as well as X‐ray and NMR structure refinement.
Abstract: One of the conclusions drawn at the CASP4 meeting in Asilomar was that applying various force fields during refinement of template-based models tends to move predictions in the wrong direction, away from the experimentally determined coordinates. We have derived an all-atom force field aimed at protein and nucleotide optimization in vacuo (NOVA), which has been specifically designed to avoid this problem. NOVA resembles common molecular dynamics force fields but has been automatically parameterized with two major goals: (i) not to make high resolution X-ray structures worse and (ii) to improve homology models built by WHAT IF. Force-field parameters were not required to be physically correct; instead, they were optimized with random Monte Carlo moves in force-field parameter space, each one evaluated by simulated annealing runs of a 50-protein optimization set. Errors inherent to the approximate force-field equation could thus be canceled by errors in force-field parameters. Compared with the optimization set, the force field did equally well on an independent validation set and is shown to move in silico models closer to reality. It can be applied to modeling applications as well as X-ray and NMR structure refinement. A new method to assign force-field parameters based on molecular trees is also presented. A NOVA server is freely accessible at http://www.yasara.com/servers
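
The abstract describes a two-level search: random Monte Carlo moves in force-field parameter space, each candidate scored by simulated annealing runs over a 50-protein optimization set. The paper does not spell out the annealing kernel itself, so the following is only a minimal sketch of the generic simulated-annealing loop such a scheme rests on; the function names and toy energy landscape are illustrative assumptions, not the NOVA implementation.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, t_min=1e-4, alpha=0.95, steps_per_t=100):
    """Generic simulated annealing: accept uphill moves with probability
    exp(-dE/T) so the search can escape local minima."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = neighbor(x)
            de = energy(cand) - e
            if de < 0 or random.random() < math.exp(-de / t):
                x, e = cand, e + de
                if e < best_e:
                    best_x, best_e = x, e
        t *= alpha  # geometric cooling; the NOVA schedule itself is not given in the abstract
    return best_x, best_e

# Toy usage: a 1-D landscape with several local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x: x + random.gauss(0, 0.5)
print(simulated_annealing(f, step, x0=5.0))
```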

1,354 citations


Journal ArticleDOI
TL;DR: An ant colony optimization (ACO) approach for the resource-constrained project scheduling problem (RCPSP) is presented in this paper, where several new features that are interesting for ACO are proposed and evaluated.
Abstract: An ant colony optimization (ACO) approach for the resource-constrained project scheduling problem (RCPSP) is presented. Several new features that are interesting for ACO in general are proposed and evaluated. In particular, the use of a combination of two pheromone evaluation methods by the ants to find new solutions, a change of the influence of the heuristic on the decisions of the ants during the run of the algorithm, and the option that an elitist ant forgets the best-found solution are studied. We tested the ACO algorithm on a set of large benchmark problems from the Project Scheduling Library. Compared to several other heuristics for the RCPSP, including genetic algorithms, simulated annealing, tabu search, and different sampling methods, our algorithm performed best on average. For nearly one-third of all benchmark problems, which were not known to be solved optimally before, the algorithm was able to find new best solutions.
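
The abstract mentions that ants combine two pheromone evaluation methods when extending a schedule. A hedged sketch of what such a combined transition rule could look like for the RCPSP follows; the direct-plus-summation combination, the mixing weight `c`, and all names are assumptions for illustration, not the paper's exact rule.

```python
import random

def choose_activity(eligible, pos, tau, eta, alpha=1.0, beta=2.0, c=0.5):
    """Pick the next activity for schedule position `pos`, combining two
    pheromone evaluations: the direct value tau[pos][j] and a 'summation'
    value over all positions up to pos. The weighting and the summation
    rule are illustrative assumptions."""
    weights = []
    for j in eligible:
        direct = tau[pos][j]
        summed = sum(tau[i][j] for i in range(pos + 1))
        pher = c * direct + (1 - c) * summed
        weights.append((pher ** alpha) * (eta[j] ** beta))
    # Roulette-wheel selection proportional to the combined weight.
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for j, w in zip(eligible, weights):
        acc += w
        if r <= acc:
            return j
    return eligible[-1]

# Toy usage: 4 activities, uniform pheromone, heuristic favouring activity 2.
n = 4
tau = [[1.0] * n for _ in range(n)]
eta = [0.5, 0.5, 2.0, 0.5]
print(choose_activity([0, 2, 3], pos=1, tau=tau, eta=eta))
```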

671 citations


Journal ArticleDOI
TL;DR: An automated planning system in which the traditional intensity optimization is bypassed, and instead the shapes and the weights of the apertures are optimized, which produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
Abstract: IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach “direct aperture optimization.” This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
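
Because the MLC delivery constraints are enforced inside the optimization rather than in a separate leaf-sequencing step, each annealing move must keep the aperture deliverable. The sketch below illustrates one such move on a toy aperture representation; the data layout, step sizes, and the single left-before-right leaf constraint are simplifying assumptions, not the authors' implementation.

```python
import random

def perturb_plan(apertures, max_leaf_step=2, weight_sigma=0.05):
    """One simulated-annealing move for direct aperture optimization:
    nudge either a single MLC leaf position or an aperture weight,
    enforcing the delivery constraint inside the move itself rather
    than in a later leaf-sequencing step."""
    new = [{"left": a["left"][:], "right": a["right"][:], "w": a["w"]} for a in apertures]
    ap = random.choice(new)
    if random.random() < 0.5:
        row = random.randrange(len(ap["left"]))
        side = random.choice(("left", "right"))
        ap[side][row] += random.randint(-max_leaf_step, max_leaf_step)
        # MLC constraint: left leaf must stay strictly left of the right leaf.
        if ap["left"][row] >= ap["right"][row]:
            return apertures  # reject the infeasible move, keep the current plan
    else:
        ap["w"] = max(0.0, ap["w"] + random.gauss(0, weight_sigma))  # weights stay non-negative
    return new

# Toy usage: one aperture with three leaf pairs.
plan = [{"left": [0, 0, 0], "right": [5, 5, 5], "w": 1.0}]
print(perturb_plan(plan))
```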

381 citations


Journal ArticleDOI
TL;DR: This paper presents a method for solving the multi-depot location-routing problem (MDLRP) in which several unrealistic assumptions are relaxed and the setting of parameters throughout the solution procedure for obtaining quick and favorable solutions is suggested.

372 citations


Journal ArticleDOI
TL;DR: An efficient algorithm for loss minimization via automatic switching operations in large-scale distribution systems is presented, using simulated annealing with a polynomial-time cooling schedule based on statistics gathered during the search.
Abstract: This paper presents an efficient algorithm for loss minimization by using an automatic switching operation in large-scale distribution systems. Simulated annealing is particularly well suited to large combinatorial optimization problems since it can escape local minima by occasionally accepting moves that worsen the cost. However, it often requires a carefully designed cooling schedule and a special strategy that exploits the structure of distribution systems to find the optimal solution. In this paper, we augment the cost function with the operating conditions of distribution systems, improve the perturbation mechanism with system topology, and utilize the polynomial-time cooling schedule, which is based on statistical calculations made during the search. The validity and effectiveness of the proposed methodology are demonstrated on the Korea Electric Power Corporation's distribution system.
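
The "polynomial-time cooling schedule based on statistical calculation during the search" is, in its usual formulation (commonly attributed to Aarts and van Laarhoven), a decrement rule driven by the spread of costs observed at the current temperature. Assuming that is the schedule meant here, a minimal sketch:

```python
import math
import statistics

def next_temperature(t, costs, delta=0.1):
    """Aarts--van Laarhoven polynomial-time decrement rule (assumed here
    to be the statistically driven cooling step the paper means):
        T_{k+1} = T_k / (1 + T_k * ln(1 + delta) / (3 * sigma_k))
    where sigma_k is the standard deviation of costs sampled at T_k.
    Small delta cools slowly; larger delta cools faster."""
    sigma = statistics.pstdev(costs)
    if sigma == 0:
        return t  # no cost variation observed: hold the temperature
    return t / (1.0 + t * math.log(1.0 + delta) / (3.0 * sigma))

# Toy usage: costs sampled during one temperature plateau.
print(next_temperature(10.0, [105.2, 98.7, 101.3, 99.9, 103.4]))
```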

290 citations


Journal ArticleDOI
TL;DR: In this article, the problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem, where the multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve.
Abstract: The problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem. The multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve. The task is to reserve a subset of sites that best meet this objective. We use data on the distribution of habitats in the Northern Territory, Australia, to show how simulated annealing and a greedy heuristic algorithm can be used to generate good solutions to such large reserve design problems, and to compare the effectiveness of these methods.
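
The multiobjective function combines boundary length, area, and failed representation. As a hedged illustration of how such a score can be computed for a candidate set of sites, consider the sketch below; the specific weights (`blm`, `penalty`) and data layout are assumptions, not the paper's calibration.

```python
def reserve_objective(selected, area, boundary, targets, amount, blm=1.0, penalty=100.0):
    """Score a candidate reserve system as the abstract describes:
    a weighted sum of total area, exposed boundary length, and a penalty
    for biological attributes whose representation targets are missed."""
    total_area = sum(area[s] for s in selected)
    # Boundary counts only edges between a selected and an unselected site.
    total_boundary = sum(l for (i, j), l in boundary.items()
                         if (i in selected) != (j in selected))
    shortfall = 0.0
    for attr, target in targets.items():
        held = sum(amount[s].get(attr, 0.0) for s in selected)
        shortfall += max(0.0, target - held)
    return total_area + blm * total_boundary + penalty * shortfall

# Toy usage: three sites, two habitat types.
area = {1: 10.0, 2: 8.0, 3: 12.0}
boundary = {(1, 2): 4.0, (2, 3): 3.0}
amount = {1: {"a": 5.0}, 2: {"b": 6.0}, 3: {"a": 2.0, "b": 1.0}}
targets = {"a": 6.0, "b": 5.0}
print(reserve_objective({1, 3}, area, boundary, targets, amount))
```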

269 citations


Journal ArticleDOI
TL;DR: In this paper, simulated annealing was used to invert fundamental and higher-mode Rayleigh wave dispersion curves simultaneously for an S-wave velocity profile, and the inversion was applied to near-surface seismic data (with a maximum depth of investigation of around 10 m) obtained over a thick lacustrine clay sequence.
Abstract: Simulated annealing was used to invert fundamental and higher-mode Rayleigh wave dispersion curves simultaneously for an S-wave velocity profile. The inversion was applied to near-surface seismic data (with a maximum depth of investigation of around 10 m) obtained over a thick lacustrine clay sequence. The geology was described in the inversion either in terms of discrete layers or by a superposition of Chebyshev polynomials, and the contrasting results compared. Simulated annealing allows for considerable flexibility in model definition and parametrization and seeks a global rather than a local minimum of a misfit function. It has the added advantage that it can be used to determine uncertainties in inversion parameters, thereby highlighting features in an inverted profile that should be interpreted with caution. Results show that simulated annealing works well for the inversion of multimodal near-surface Rayleigh wave dispersion curves relative to the same inversion employing only the fundamental mode.
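
The two model descriptions, discrete layers versus a superposition of Chebyshev polynomials, translate into two different parametrizations of Vs(z). A minimal sketch of both, with made-up coefficients and depths purely for illustration:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def vs_layered(z, interfaces, velocities):
    """Discrete-layer parametrization: piecewise-constant Vs with depth."""
    idx = np.searchsorted(interfaces, z)
    return np.asarray(velocities)[idx]

def vs_chebyshev(z, coeffs, z_max=10.0):
    """Smooth parametrization: Vs(z) as a superposition of Chebyshev
    polynomials on [0, z_max] (depth rescaled to [-1, 1]). The coefficients
    would be the simulated-annealing unknowns; the values here are made up."""
    x = 2.0 * np.asarray(z) / z_max - 1.0
    return C.chebval(x, coeffs)

z = np.linspace(0.0, 10.0, 5)
print(vs_layered(z, interfaces=[2.0, 6.0], velocities=[120.0, 180.0, 260.0]))
print(vs_chebyshev(z, coeffs=[180.0, 60.0, -15.0]))
```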

261 citations


Journal ArticleDOI
TL;DR: An optimization framework based on simulated annealing is used for site selection and for base-station configuration and shows that cellular network design problems are tractable for realistic problem instances.
Abstract: This paper deals with the automatic selection and configuration of base station sites for mobile cellular networks. An optimization framework based on simulated annealing is used for site selection and for base-station configuration. Realistic path-loss estimates incorporating terrain data are used. The configuration of each base station involves selecting antenna type, power control, azimuth, and tilt. Results are presented for several design scenarios with between 250 and 750 candidate sites and show that the optimization framework can generate network designs with desired characteristics such as high area coverage and high traffic capacity. The work shows that cellular network design problems are tractable for realistic problem instances.

244 citations


Journal ArticleDOI
TL;DR: Simulated annealing (SA), a meta-heuristic, is employed in this study to determine a scheduling policy that minimizes total tardiness; experiments show that the proposed SA method significantly outperforms a neighborhood search method in terms of total tardiness.
Abstract: This paper presents a scheduling problem for unrelated parallel machines with sequence-dependent setup times, using simulated annealing (SA). The problem accounts for allotting work parts of L jobs into M parallel unrelated machines, where a job refers to a lot composed of N items. Some jobs may have different items while every item within each job has an identical processing time with a common due date. Each machine has its own processing times according to the characteristics of the machine as well as job types. Setup times are machine independent but job sequence dependent. SA, a meta-heuristic, is employed in this study to determine a scheduling policy so as to minimize total tardiness. The suggested SA method utilizes six job or item rearranging techniques to generate neighborhood solutions. The experimental analysis shows that the proposed SA method significantly outperforms a neighborhood search method in terms of total tardiness.
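
The SA method generates neighborhood solutions with six job or item rearranging techniques; the sketch below shows a tardiness evaluation for unrelated machines with sequence-dependent (but machine-independent) setups, plus two representative moves. Treating jobs as atomic units and the two particular moves are simplifying assumptions; the paper's six techniques operate at both the job and item level.

```python
import random

def total_tardiness(schedule, proc_time, setup, due):
    """Sum of tardiness over all machines; setup times depend on the
    preceding job but not on the machine, as in the paper."""
    tard = 0.0
    for m, jobs in enumerate(schedule):
        t, prev = 0.0, None
        for j in jobs:
            if prev is not None:
                t += setup[prev][j]
            t += proc_time[m][j]       # unrelated machines: per-machine times
            tard += max(0.0, t - due[j])
            prev = j
    return tard

def neighbor(schedule):
    """Two representative rearranging moves (the paper uses six): swap
    jobs across machines, or move one job to a random position on
    another machine."""
    new = [jobs[:] for jobs in schedule]
    m1, m2 = random.sample(range(len(new)), 2)
    if new[m1] and new[m2] and random.random() < 0.5:
        i, k = random.randrange(len(new[m1])), random.randrange(len(new[m2]))
        new[m1][i], new[m2][k] = new[m2][k], new[m1][i]
    elif new[m1]:
        j = new[m1].pop(random.randrange(len(new[m1])))
        new[m2].insert(random.randrange(len(new[m2]) + 1), j)
    return new

# Toy usage: 2 machines, 3 jobs.
proc = [[3, 2, 4], [2, 5, 3]]
setup = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
due = [4, 6, 7]
s = [[0, 2], [1]]
print(total_tardiness(s, proc, setup, due), neighbor(s))
```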

233 citations


Journal ArticleDOI
TL;DR: This article presents a comparative study of four modern heuristic algorithms for service restoration in distribution systems: reactive tabu search, tabu search, parallel simulated annealing, and genetic algorithm.
Abstract: This article presents a comparative study of four modern heuristic algorithms (MHAs) for service restoration in distribution systems: reactive tabu search, tabu search, parallel simulated annealing, and genetic algorithm. Since service restoration is an emergency control action in distribution control centers to restore out-of-service areas as soon as possible, it requires fast computation and high-quality solutions for customers' satisfaction. The problem can be formulated as a combinatorial optimization problem of dividing the out-of-service area among the available power sources. The MHAs are compared against each other on typical service restoration problems.

230 citations


Journal ArticleDOI
TL;DR: This paper demonstrates how simulated annealing, a heuristic algorithm, can be used to solve high-dimensional non-linear optimisation problems for multi-site land use allocation (MLUA); the optimisation model both minimises development costs and maximises spatial compactness of the land use.
Abstract: Many resource allocation issues, such as land use or irrigation planning, require input from extensive spatial databases and involve complex decision-making problems. Spatial decision support systems (SDSS) are designed to make these issues more transparent and to support the design and evaluation of resource allocation alternatives. Recent developments in this field focus on the design of allocation plans that utilise mathematical optimisation techniques. These techniques, often referred to as multi-criteria decision-making (MCDM) techniques, run into numerical problems when faced with the high dimensionality encountered in spatial applications. In this paper we demonstrate how simulated annealing, a heuristic algorithm, can be used to solve high-dimensional non-linear optimisation problems for multi-site land use allocation (MLUA) problems. The optimisation model both minimises development costs and maximises spatial compactness of the land use. Compactness is achieved by adding a non-linear neighbourhood objective to the objective function. The method is successfully applied to a case study in Galicia, Spain, using an SDSS for supporting the restoration of a former mining area with new land use.
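
The key modelling idea is the non-linear neighbourhood term added to the objective to reward compactness. A hedged sketch of such a combined objective on a raster of land-use codes follows; the particular neighbourhood function (counting like-use 4-neighbour pairs) and the weight `w_compact` are illustrative assumptions.

```python
import numpy as np

def mlua_objective(grid, cost, w_compact=1.0):
    """Objective for multi-site land use allocation as the abstract
    sketches it: total development cost plus a neighbourhood term that
    rewards spatial compactness. `grid` holds a land-use code per cell;
    `cost[use][row, col]` is that cell's development cost for `use`."""
    rows, cols = grid.shape
    dev_cost = sum(cost[grid[r, c]][r, c] for r in range(rows) for c in range(cols))
    # Compactness: count 4-neighbour pairs sharing the same land use.
    same = np.sum(grid[:, :-1] == grid[:, 1:]) + np.sum(grid[:-1, :] == grid[1:, :])
    return dev_cost - w_compact * float(same)

# Toy usage: a 3x3 grid, two land uses with random per-cell costs.
rng = np.random.default_rng(0)
cost = {0: rng.uniform(1, 5, (3, 3)), 1: rng.uniform(1, 5, (3, 3))}
grid = rng.integers(0, 2, (3, 3))
print(mlua_objective(grid, cost))
```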

Journal ArticleDOI
TL;DR: Numerical results show that all EP algorithms are capable of finding very nearly global solutions within a reasonable time, but an EP-algorithm with the better of the Gaussian and Cauchy mutations appears to be the best among all EPs in terms of convergence speed, solution time, and cost.
Abstract: Fast evolutionary programming techniques are applied to the solution of a short-term hydrothermal scheduling problem. Evolutionary programming (EP)-based algorithms with Gaussian and other mutation techniques have been developed and tested on a multi-reservoir cascaded hydroelectric system having prohibited operating zones and a thermal unit with valve-point loading. Numerical results show that all of the EP algorithms are capable of finding very nearly global solutions within a reasonable time, but an EP algorithm using the better of Gaussian and Cauchy mutations appears to be the best amongst all EPs in terms of convergence speed, solution time, and minimum cost.
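
Gaussian and Cauchy mutations differ in tail weight: Cauchy's heavy tails produce occasional long jumps that help escape local optima, while Gaussian favours fine local moves. The sketch below generates one offspring from "the better of" the two mutations; reading that phrase as generate-both-keep-the-cheaper is an assumption, as are the toy cost function and step sizes.

```python
import math
import random

def mutate_gauss(x, eta):
    """Gaussian mutation: mostly small, local steps."""
    return [xi + e * random.gauss(0, 1) for xi, e in zip(x, eta)]

def mutate_cauchy(x, eta):
    """Cauchy mutation via the inverse CDF tan(pi*(u - 1/2)):
    heavy tails give occasional long jumps."""
    return [xi + e * math.tan(math.pi * (random.random() - 0.5)) for xi, e in zip(x, eta)]

def offspring(x, eta, cost):
    """Produce one child using 'the better of' Gaussian and Cauchy
    mutations; generating both and keeping the cheaper one is an
    assumed reading of that phrase."""
    g, c = mutate_gauss(x, eta), mutate_cauchy(x, eta)
    return min((g, c), key=cost)

# Toy usage: a 2-D sphere cost.
sphere = lambda v: sum(vi * vi for vi in v)
print(offspring([1.0, -2.0], eta=[0.3, 0.3], cost=sphere))
```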

Book ChapterDOI
TL;DR: In this paper, a new hybrid algorithm for examination timetabling is presented, consisting of three phases: a constraint programming phase to develop an initial solution, a simulated annealing phase to improve the quality of solution, and a hill climbing phase for further improvement.
Abstract: Examination timetabling is a well-studied combinatorial optimization problem. We present a new hybrid algorithm for examination timetabling, consisting of three phases: a constraint programming phase to develop an initial solution, a simulated annealing phase to improve the quality of the solution, and a hill climbing phase for further improvement. The examination timetabling problem at the University of Melbourne is introduced, and the hybrid method is shown to be superior to the method currently employed by the University. Finally, the hybrid method is compared to established methods on the publicly available data sets, and found to perform well in comparison.

Journal ArticleDOI
TL;DR: In this paper, a hybrid GA and simulated annealing (SA) approach is used to solve the problem of process planning for a prismatic part in a dynamic workshop environment, where the activities of selecting machining resources, determining set-up plans and sequencing machining operations are simultaneously considered.
Abstract: For a CAPP system in a dynamic workshop environment, the activities of selecting machining resources, determining set-up plans and sequencing machining operations should be considered simultaneously to achieve the global lowest machining cost. Optimizing process plans for a prismatic part usually suffer from complex technological requirements and geometric relationships between features in the part. Here, process planning is modelled as a combinatorial optimization problem with constraints, and a hybrid genetic algorithm (GA) and simulated annealing (SA) approach has been developed to solve it. The evaluation criterion of machining cost comes from the combined strengths of machine costs, cutting tool costs, machine changes, tool changes and set-ups. The GA is carried out in the first stage to generate some initially good process plans. Based on a few selective plans with Hamming distances between each other, the SA algorithm is employed to search for alternative optimal or near-optimal process plans. In t...

Journal ArticleDOI
TL;DR: A two-stage procedure, using a simulated annealing approach, was developed to tackle the mixed-model assembly line balancing problem with parallel workstations and zoning constraints, and the results show that even for large-scale problems the proposed procedure performs very well.
Abstract: This work presents a new mathematical programming model for the mixed-model assembly line balancing problem with parallel workstations and zoning constraints. It allows the user to control the process to create parallel workstations. The model's primary goal is to minimize the number of workstations along the line, for a given cycle time, and its secondary goal is to balance the workloads between and within workstations. A two-stage procedure, using a simulated annealing approach, was developed to tackle this complex problem. The first stage of the procedure looks for a sub-optimal solution to the problem's primary goal, whilst the second stage deals with the secondary goal. The procedure is illustrated with a numerical example and the results from computational experiments show that even for large-scale problems the proposed procedure performs very well.

Journal ArticleDOI
Herbert Meyr
TL;DR: The simultaneous lotsizing and scheduling of several products on non-identical parallel production lines (heterogeneous machines) is addressed by combining the local search metastrategies threshold accepting (TA) and simulated annealing (SA) with dual reoptimization.

Journal ArticleDOI
TL;DR: The algorithm is applied in the construction of parsimonious quantitative structure-activity relationship (QSAR) models based on feed-forward neural networks and is tested on three classical data sets from the QSAR literature.
Abstract: We present a new feature selection algorithm for structure-activity and structure-property correlation based on particle swarms. Particle swarms explore the search space through a population of individuals that adapt by returning stochastically toward previously successful regions, influenced by the success of their neighbors. This method, which was originally intended for searching multidimensional continuous spaces, is adapted to the problem of feature selection by viewing the location vectors of the particles as probabilities and employing roulette wheel selection to construct candidate subsets. The algorithm is applied in the construction of parsimonious quantitative structure-activity relationship (QSAR) models based on feed-forward neural networks and is tested on three classical data sets from the QSAR literature. It is shown that the method compares favorably with simulated annealing and is able to identify a better and more diverse set of solutions given the same amount of simulation time.
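
The adaptation described, treating a particle's location vector as per-feature probabilities and building candidate subsets by roulette wheel selection, can be sketched as below. Fixing the subset size `k` and the exact sampling-without-replacement scheme are assumptions for illustration.

```python
import random

def select_features(position, k):
    """Treat a particle's location vector as per-feature selection
    probabilities and draw a k-feature subset by roulette-wheel
    sampling without replacement."""
    probs = [max(p, 1e-9) for p in position]   # keep the wheel strictly positive
    chosen = set()
    while len(chosen) < k:
        remaining = [i for i in range(len(probs)) if i not in chosen]
        total = sum(probs[i] for i in remaining)
        r = random.uniform(0, total)
        acc, pick = 0.0, remaining[-1]         # fallback guards float round-off
        for i in remaining:
            acc += probs[i]
            if r <= acc:
                pick = i
                break
        chosen.add(pick)
    return sorted(chosen)

# Toy usage: 6 candidate descriptors, pick 3; high-probability features dominate.
particle = [0.9, 0.1, 0.6, 0.2, 0.8, 0.05]
print(select_features(particle, k=3))
```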

Book ChapterDOI
01 Jan 2002
TL;DR: This procedure, which adapts Glover's taboo search for discrete functions to the multiple-minima problem for continuous functions, is generally applicable, easy to implement, derivative-free, and conceptually simple.
Abstract: We describe an approach, based on Taboo (or “Tabu”) Search for discrete functions, for solving the multiple-minima problem of continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. The procedure is generally applicable, derivative-free, easy to implement, conceptually simpler than Simulated Annealing, and open to further improvement.
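
Adapting taboo search to continuous spaces essentially means forbidding returns to the neighbourhood of recently visited points while always taking the best admissible move, uphill or not. The following is a minimal sketch under those assumptions; the candidate sampling, taboo radius, and tenure are illustrative choices, not the chapter's exact procedure.

```python
import math
import random

def tabu_search_continuous(f, x0, bounds, iters=500, n_cand=20, radius=0.2, tenure=25):
    """Minimal continuous taboo search: sample candidate moves, forbid
    returns to the neighbourhood of recently visited points, always move
    to the best admissible candidate (even uphill), and remember the best
    point seen."""
    lo, hi = bounds
    x, tabu = x0, []
    best_x, best_f = x0, f(x0)
    for _ in range(iters):
        cands = [min(hi, max(lo, x + random.uniform(-0.5, 0.5))) for _ in range(n_cand)]
        admissible = [c for c in cands if all(abs(c - t) > radius for t in tabu)]
        if not admissible:
            continue                      # everything taboo: resample next iteration
        x = min(admissible, key=f)        # best admissible move, uphill allowed
        tabu.append(x)
        if len(tabu) > tenure:
            tabu.pop(0)                   # oldest taboo entry expires
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Toy usage: a 1-D function with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
print(tabu_search_continuous(f, x0=4.0, bounds=(-6.0, 6.0)))
```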

Journal ArticleDOI
TL;DR: On a test case with experimental data on the dimeric protein Desulforedoxin, the method proposed here supplied similar results in less than 10 minutes compared to approximately 10 hours of computation time for DYANA, showing that CP technology can greatly reduce computation time.
Abstract: In this paper we propose PSICO (Processing Structural Information with Constraint programming and Optimisation) as a constraint-based approach to determining protein structures compatible with distance constraints obtained from Nuclear Magnetic Resonance (NMR) data. We compare the performance of our proposed algorithm with DYANA (“Dynamics algorithm for NMR applications”), an existing commercial application based on simulated annealing. On a test case with experimental data on the dimeric protein Desulforedoxin, the method proposed here supplied similar results in less than 10 minutes, compared to approximately 10 hours of computation time for DYANA. Although the quality of results can still be improved, this shows that CP technology can greatly reduce computation time, a major advantage because the structural NMR technique generally demands multiple runs of structural computation.

Proceedings ArticleDOI
09 Jan 2002
TL;DR: A parallel simulated annealing algorithm to solve the vehicle routing problem with time windows is presented, and the empirical evidence indicates that parallel simulated annealing can be applied with success to bicriterion optimization problems.
Abstract: A parallel simulated annealing algorithm to solve the vehicle routing problem with time windows is presented. The objective is to find the best possible solutions to some well-known instances of the problem by using parallelism. The empirical evidence indicates that parallel simulated annealing can be applied with success to bicriterion optimization problems.

Journal ArticleDOI
TL;DR: An Ant Colony Optimization approach is proposed to solve the 2-machine flowshop scheduling problem with the objective of minimizing both the total completion time and the makespan criteria.

Posted Content
Abstract: We explain why quantum adiabatic evolution and simulated annealing perform similarly in certain examples of searching for the minimum of a cost function of n bits. In these examples each bit is treated symmetrically so the cost function depends only on the Hamming weight of the n bits. We also give two examples, closely related to these, where the similarity breaks down in that the quantum adiabatic algorithm succeeds in polynomial time whereas simulated annealing requires exponential time.

Journal ArticleDOI
TL;DR: This work describes an Ant Colony Optimization (ACO) algorithm with a new feature, the use of look-ahead information in the transition rule, which is shown to improve performance.
Abstract: We compare several heuristics for solving a single machine scheduling problem. In the operating situation modelled, setup times are sequence-dependent and the objective is to minimize total tardiness. We describe an Ant Colony Optimization (ACO) algorithm having a new feature using look-ahead information in the transition rule. This feature shows an improvement in performance. A comparison with a genetic algorithm, a simulated annealing approach, a local search method and a branch-and-bound algorithm indicates that the ACO that we describe is competitive and has a certain advantage for larger problems.

Book ChapterDOI
18 Nov 2002
TL;DR: The Simulated Annealing scheduler is compared to an Ad-Hoc Greedy scheduler used in earlier experiments, exposing some assumptions built into the Ad-Hoc scheduler and some problems with the Performance Model being used.
Abstract: Generating high quality schedules for distributed applications on a Computational Grid is a challenging problem. Some experiments using Simulated Annealing as a scheduling mechanism for a ScaLAPACK LU solver on a Grid are described. The Simulated Annealing scheduler is compared to an Ad-Hoc Greedy scheduler used in earlier experiments. The Simulated Annealing scheduler exposes some assumptions built into the Ad-Hoc scheduler and some problems with the Performance Model being used.

Journal ArticleDOI
TL;DR: In this paper, a new prestack inversion algorithm was developed to simultaneously estimate acoustic and shear impedances from P-wave reflection seismic data, which uses a global optimization procedure in the form of simulated annealing.
Abstract: A new prestack inversion algorithm has been developed to simultaneously estimate acoustic and shear impedances from P‐wave reflection seismic data. The algorithm uses a global optimization procedure in the form of simulated annealing. The goal of optimization is to find a global minimum of the objective function, which includes the misfit between synthetic and observed prestack seismic data. During the iterative inversion process, the acoustic and shear impedance models are randomly perturbed, and the synthetic seismic data are calculated and compared with the observed seismic data. To increase stability, constraints have been built into the inversion algorithm, using the low‐frequency impedance and background Vs/Vp models. The inversion method has been successfully applied to synthetic and field data examples to produce acoustic and shear impedances comparable to log data of similar bandwidth. The estimated acoustic and shear impedances can be combined to derive other elastic parameters, which may be use...

Journal ArticleDOI
TL;DR: This paper compares three heuristic search algorithms: genetic algorithm (GA), simulated annealing (SA) and tabu search (TS), for hardware–software partitioning and shows that TS is superior to SA and GA in terms of both search time and quality of solutions.
Abstract: This paper compares three heuristic search algorithms: genetic algorithm (GA), simulated annealing (SA) and tabu search (TS), for hardware–software partitioning. The algorithms operate on functional blocks for designs represented as directed acyclic graphs, with the objective of minimising processing time under various hardware area constraints. The comparison involves a model for calculating processing time based on a non-increasing first-fit algorithm to schedule tasks, given that shared resource conflicts do not occur. The results show that TS is superior to SA and GA in terms of both search time and quality of solutions. In addition, we have implemented an intensification strategy in TS called penalty reward, which can further improve the quality of results.

Proceedings ArticleDOI
12 May 2002
TL;DR: A genetic algorithm based hyperheuristic (hyper-GA) is shown to solve the problem of scheduling geographically distributed training staff and courses successfully, with results presented for four versions of the hyper-GA as well as a range of simpler heuristics, applied to five test data sets.
Abstract: This paper investigates a genetic algorithm based hyperheuristic (hyper-GA) for scheduling geographically distributed training staff and courses. The aim of the hyper-GA is to evolve a good-quality heuristic for each given instance of the problem and use this to find a solution by applying a suitable ordering from a set of low-level heuristics. Since the user only supplies a number of low-level problem-specific heuristics and an evaluation function, the hyperheuristic can easily be reimplemented for a different type of problem, and we would expect it to be robust across a wide range of problem instances. We show that the problem can be solved successfully by a hyper-GA, presenting results for four versions of the hyper-GA as well as a range of simpler heuristics, applied to five test data sets.

Journal ArticleDOI
TL;DR: This paper significantly extends traditional facility location models by introducing several logistical cost components such as holding, ordering, and transportation costs in a multi-commodity, multilocation framework.

Posted Content
01 Jan 2002
TL;DR: This study uses Monte-Carlo simulations to determine the efficiency of a local search algorithm relative to 9 stochastic global algorithms.
Abstract: Training a neural network is a difficult optimization problem because of numerous local minima. Many global search algorithms have been used to train neural networks. However, local search algorithms are more efficient with computational resources, and therefore numerous random restarts with a local algorithm may be more effective than a global algorithm. This study uses Monte-Carlo simulations to determine the efficiency of a local search algorithm relative to 9 stochastic global algorithms. The computational requirements of the global algorithms are several times higher than those of the local algorithm, and there is little gain in using the global algorithms to train neural networks.
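
The study's argument, that many cheap local searches from random starting points can beat one expensive global run for the same budget, is easy to state in code. A minimal sketch with an illustrative one-dimensional objective (the actual study trains neural networks, which this toy does not attempt):

```python
import math
import random

def local_descent(f, x, step=0.1, iters=200):
    """Cheap local search: greedy stochastic descent from one start."""
    fx = f(x)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) < fx:
            x, fx = cand, f(cand)
    return x, fx

def random_restarts(f, bounds, n_starts=20):
    """Run many cheap local searches from random starts and keep
    the best result; the starts and budget here are illustrative."""
    lo, hi = bounds
    results = (local_descent(f, random.uniform(lo, hi)) for _ in range(n_starts))
    return min(results, key=lambda r: r[1])

# Toy usage: a 1-D function with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
print(random_restarts(f, bounds=(-6.0, 6.0)))
```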

Journal ArticleDOI
TL;DR: The motivation for multiple robot control and the current state of the art in the field of cooperating robots are briefly reviewed, followed by a discussion of energy minimization techniques in the context of robotics; the principles of using genetic algorithms and simulated annealing as optimization tools are also included.