
Showing papers on "Multi-swarm optimization" published in 1987



Journal ArticleDOI
TL;DR: In this article, the authors discuss some basic opportunities for the use of multiprocessing in the solution of optimization problems, including unconstrained optimization and global optimization, in the important case when function evaluation is expensive and gradients are evaluated by finite differences.
Abstract: This paper discusses some basic opportunities for the use of multiprocessing in the solution of optimization problems. We consider two fundamental optimization problems, unconstrained optimization and global optimization, in the important case when function evaluation is expensive and gradients are evaluated by finite differences. First we discuss some simple parallel strategies based upon the use of concurrent function evaluations to evaluate the finite difference gradient. These include the speculative evaluation of the gradient concurrently with the evaluation of the function before it is known whether the gradient value at this point will be required. We present examples that indicate the effectiveness of these parallel strategies for unconstrained optimization. We also give experimental results that show the effect of using these strategies to parallelize each of the multiple local minimizations within a recently proposed concurrent global optimization algorithm. We briefly discuss several parallel optimization strategies that are related to these approaches but make more fundamental changes to standard sequential optimization algorithms.
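
The speculative strategy is easy to sketch. Below is a minimal Python illustration (an assumption, not the paper's code): a thread pool stands in for the multiprocessor, and all n + 1 forward-difference evaluations are submitted together, so the gradient is computed concurrently with the function value, before it is known whether the gradient at that point will be required. `fd_gradient_parallel` and `rosenbrock` are illustrative names.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fd_gradient_parallel(f, x, h=1e-6):
    """Submit all n + 1 forward-difference evaluations concurrently.

    Submitting these as soon as a trial point x is proposed computes the
    gradient speculatively, alongside f(x), before the line search has
    decided whether x will be accepted; a rejected point merely wastes
    otherwise idle processors.
    """
    n = len(x)
    points = [x] + [x + h * e for e in np.eye(n)]
    with ThreadPoolExecutor(max_workers=n + 1) as pool:
        values = list(pool.map(f, points))
    fx, shifted = values[0], np.array(values[1:])
    return fx, (shifted - fx) / h

# Hypothetical expensive objective; real speed-ups require evaluations
# that release the GIL (native code, I/O) or a process pool instead.
def rosenbrock(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

fx, grad = fd_gradient_parallel(rosenbrock, np.array([0.5, 0.5]))
print(fx, grad)
```

With n + 1 processors available, the gradient then costs no more wall-clock time than a single function evaluation.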

41 citations


Journal ArticleDOI
Qizhong Wang
TL;DR: An evolutionary algorithm for combinatorial optimization is developed that is more likely to find the global optimum, and as many local optima as possible, in a single run, and that covers a larger region of the search space effectively.
Abstract: Based on the analogy between mathematical optimization and molecular evolution, and on Eigen's quasi-species model of molecular evolution, an evolutionary algorithm for combinatorial optimization has been developed. The algorithm consists of a versatile variation scheme and an innovative decision rule, the essence of which lies in a radical revision of the conventional philosophy of optimization: a number of configurations of variables with better values, instead of only a single best configuration, are selected as starting points for the next iteration. As a result, the search proceeds in parallel along a number of routes and is unlikely to get trapped in local optima. An important innovation of the algorithm is the introduction of a constraint that keeps the starting points at a certain minimum distance from each other, so that the search covers a larger region of the space effectively. The main advantage of the algorithm is that it is more likely to find the global optimum, and as many local optima as possible, in a single run. This has been demonstrated in preliminary computational experiments.
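
A minimal sketch of a selection rule with such a distance constraint (illustrative Python under stated assumptions, not the paper's quasi-species scheme; the bit-string encoding, all names, and all parameter values are invented for the example):

```python
import random

def hamming(a, b):
    """Number of positions at which two bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def diverse_selection(candidates, f, k, d_min):
    """Keep up to k low-cost configurations that stay at least d_min
    apart in Hamming distance (the distance constraint)."""
    selected = []
    for c in sorted(candidates, key=f):
        if all(hamming(c, s) >= d_min for s in selected):
            selected.append(c)
        if len(selected) == k:
            break
    return selected

def evolve(f, n_bits=20, k=5, d_min=3, pop=60, iters=100):
    """Search in parallel along k well-separated routes."""
    parents = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(k)]
    for _ in range(iters):
        offspring = []
        for p in parents:
            for _ in range(pop // k):
                # Flip each bit with probability 1/n_bits.
                offspring.append([b ^ (random.random() < 1.0 / n_bits) for b in p])
        parents = diverse_selection(parents + offspring, f, k, d_min)
    return parents

# Toy objective: minimize the number of ones (optimum is all zeros).
best = evolve(lambda s: sum(s))
print([sum(s) for s in best])
```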

36 citations


Book ChapterDOI
01 Jan 1987
TL;DR: The investigation of the structure of the parameter set leads to a necessary and sufficient criterion for the non-emptiness of the set of efficient points, and the results are applied to a scalarization of multi-objective optimization problems.
Abstract: In this paper, efficient and weakly efficient points of a set are characterized by an optimization problem with a parameter in the bottleneck objective function. The investigation of the structure of the parameter set leads to a necessary and sufficient criterion for the non-emptiness of the set of efficient points. The continuous dependence of the optimization problem on the parameter is investigated. Finally, the results are applied to a scalarization of multi-objective optimization problems.
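
For orientation, a standard bottleneck scalarization of this kind is the weighted Chebyshev problem below (assumed here for illustration; the paper's exact parametrization is not reproduced):

```latex
% Weighted Chebyshev (bottleneck) scalarization: weights w_i > 0,
% reference point z. A standard form, assumed for illustration.
\min_{x \in X} \; \max_{1 \le i \le m} \; w_i \bigl( f_i(x) - z_i \bigr)
```

Every minimizer of such a problem is weakly efficient, and every weakly efficient point of the underlying multi-objective problem solves it for a suitable choice of the parameters (w, z), which is why the structure of the parameter set is worth investigating.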

12 citations


Journal ArticleDOI
01 Mar 1987
TL;DR: An optimization strategy is presented that provides a framework in which optimization algorithms and heuristic procedures can be coupled to solve nonlinearly constrained design optimization problems.
Abstract: An optimization strategy is presented that provides a framework in which optimization algorithms and heuristic procedures can be coupled to solve nonlinearly constrained design optimization problems. These problems cannot be efficiently solved by either approach independently. The approach couples an optimization algorithm, based on local monotonicity analysis and sequential quadratic programming techniques, with heuristic procedures that are statistically derived from observations obtained by applying the optimization algorithm to different classes of test problems.
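
The coupling pattern can be sketched with SciPy's SLSQP solver (a sketch under assumptions: the paper's statistically derived heuristics are not reproduced, a simple random-restart heuristic merely stands in for them, and all function names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def heuristic_starts(bounds, n=5, seed=0):
    """Cheap heuristic stage: propose diverse starting points.
    (Stands in for the paper's statistically derived procedures.)"""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return [lo + rng.random(len(bounds)) * (hi - lo) for _ in range(n)]

def coupled_solve(f, constraints, bounds):
    """SQP stage: refine each heuristic start with SLSQP, keep the best."""
    best = None
    for x0 in heuristic_starts(bounds):
        res = minimize(f, x0, method="SLSQP",
                       constraints=constraints, bounds=bounds)
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best

# Toy nonlinearly constrained design problem for illustration.
f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
cons = [{"type": "ineq", "fun": lambda x: 1 - x[0]**2 - x[1]**2}]  # unit disk
print(coupled_solve(f, cons, bounds=[(-2, 2), (-2, 2)]).x)
```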

1 citation


Journal ArticleDOI
TL;DR: The methods described in the paper exploit the specific problem structure for the reduction of an optimization problem to a sequence of related but simpler and more easily solvable optimization problems.
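
A generic instance of this reduction pattern is the quadratic penalty method, which trades one constrained problem for a sequence of easier unconstrained ones, each warm-started from the last. The sketch below illustrates the pattern only, not the paper's methods (all names and the penalty schedule are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def penalty_sequence(f, g, x0, mus=(1, 10, 100, 1000)):
    """Solve a chain of unconstrained penalized problems approximating
    'minimize f subject to g(x) <= 0', warm-starting each from the last.
    (Illustrative quadratic penalty method, not the paper's scheme.)"""
    x = np.asarray(x0, dtype=float)
    for mu in mus:
        penalized = lambda x, mu=mu: f(x) + mu * max(0.0, g(x))**2
        x = minimize(penalized, x).x  # each subproblem is unconstrained
    return x

# Toy problem: minimize f subject to g(x) <= 0.
f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
g = lambda x: x[0]**2 + x[1]**2 - 1          # unit-disk constraint
print(penalty_sequence(f, g, x0=[0.0, 0.0]))
```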