
Showing papers on "Local search (optimization)" published in 1970


Proceedings ArticleDOI
01 Dec 1970
TL;DR: It is shown that Khas'minskii's conclusion is not correct, and that in fact the stochastic gradient search operates the same way as the noise-free gradient search: the search with noise converges with probability one to one of the local minima, just as the noise-free search converges to a local minimum.
Abstract: The use of a gradient search algorithm for the computation of the minimum of a function is well understood. In particular, in continuous time, the algorithm can be formulated as a differential equation whose equilibrium solution is a local minimum of the function. Khas'minskii has proposed using the continuous gradient algorithm with a white-noise driving term. He shows, using an argument on the convergence of the probability density function, that the equilibrium solution of this differential equation is the global minimum of the function. This paper reviews that result from the point of view of the theory of diffusion processes. It is shown that the conclusion of Khas'minskii is not correct, and that in fact the operation of the stochastic gradient search is the same as the operation of the noise-free gradient search. In fact, the search with noise converges with probability one to any of the local minima in the same way as the noise-free search converges to a local minimum. Several stochastic adaptive control systems use random search algorithms for their operation. One of these, due to Barron, is analyzed to show that it meets the conditions for convergence imposed by the results which have been derived here.
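
A minimal sketch of the dynamics discussed above, assuming a one-dimensional double-well objective f(x) = (x^2 - 1)^2 and an Euler-Maruyama discretization of the noise-driven gradient flow; the potential, step size, and noise level are illustrative assumptions, not details from the paper:

```python
import numpy as np

def f_grad(x):
    # Derivative of the double-well potential f(x) = (x^2 - 1)^2,
    # which has local minima at x = -1 and x = +1.
    return 4.0 * x * (x**2 - 1.0)

def stochastic_gradient_search(x0, sigma=0.1, dt=1e-3, steps=200_000, seed=0):
    # Euler-Maruyama discretization of dx = -f'(x) dt + sigma dW.
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        x += -f_grad(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# Started in the basin of x = -1 with small noise, the trajectory settles
# near that local minimum -- the behaviour the paper argues for, rather
# than a guaranteed escape to the global minimum.
print(stochastic_gradient_search(x0=-0.8))
```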

225 citations


01 Jun 1970
TL;DR: The convergence properties of several direct search optimization methods are studied experimentally, and ways in which these convergence properties can be improved are presented.
Abstract: The study is concerned with the problem of optimizing the performance of a system with respect to a set of parameters. The mathematical relation between these parameters and the system performance is unknown, so indirect optimization methods are not applicable. It is assumed that the system performance can be determined, at least approximately, for any setting of the parameters. The convergence properties of several direct search optimization methods are studied experimentally, and ways in which these convergence properties can be improved are presented. The adaptive random optimization method is modified to improve its convergence properties for application to unimodal surfaces. The convergence properties of this method are compared to those of the stochastic approximation method. The adaptive random optimization method and the stochastic automaton method are modified to improve their convergence properties for application to multimodal surfaces. The convergence properties of these methods are compared to those of the standard stochastic automaton method, the concurrent global and local search method, and the multimodal stochastic approximation method. Pattern recognition techniques are used to extend the applicability of the adaptive random optimization method and the stochastic automaton method to switching environments. The convergence properties of these methods are compared to those of the stochastic automaton method without pattern recognition. (Author)
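
A hedged sketch of a direct search in the adaptive-random-optimization spirit the report studies; the success/failure step-size rule, the growth and shrink factors, and the sphere test function are illustrative assumptions rather than the report's exact method:

```python
import numpy as np

def adaptive_random_search(f, x0, step=1.0, grow=1.2, shrink=0.8,
                           iters=5000, seed=0):
    # Perturb the current point randomly, keep improvements, and adapt
    # the step size: enlarge it after a success, shrink it after a failure.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.shape)
        fc = f(cand)
        if fc < fx:                     # success: accept and widen the search
            x, fx, step = cand, fc, step * grow
        else:                           # failure: narrow the search radius
            step *= shrink
        step = min(max(step, 1e-8), 1e2)
    return x, fx

# Example on a simple unimodal surface (the sphere function).
x_best, f_best = adaptive_random_search(lambda v: float(np.sum(v**2)),
                                        x0=[3.0, -2.0])
print(x_best, f_best)
```

Only function evaluations are used, which matches the report's setting: the relation between parameters and performance is unknown, so gradients are unavailable.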

4 citations


Book ChapterDOI
01 Jan 1970
TL;DR: The optimal algorithm for the solution of problems in the class B is defined via a minimax approach, as the algorithm a* that minimizes the worst-case quality over the problem class.
Abstract: Many computational algorithms, processes of planning experiments, and so on can be considered as controllable processes. Thus it is natural to pose the problem of devising the optimal (that is, in some sense, the best) computational algorithm for solving problems of some class. Let B denote a class of problems to be solved, and let b ∈ B be an individual problem in that class. Let A denote a class of algorithms which can be applied to the solution of an arbitrary problem from the class B, and let a ∈ A denote an individual algorithm in this class. Applying an algorithm a ∈ A to a problem b ∈ B, we can estimate the result by means of a function Q(a, b), which represents the quality of the chosen algorithm a for the solution of the specific problem b. This function Q(a, b) is defined for all a ∈ A and b ∈ B and can express, for example, the error of the solution, the expenditure of labour or computer time for obtaining the desired accuracy, and so on. In choosing the algorithm we want to minimize the estimate function Q. Since the specific properties of a problem b are usually unknown before its solution, it is reasonable to apply a minimax approach. We shall define the optimal algorithm in the class A for the solution of problems in the class B as the algorithm a* which attains the minimax value min_{a ∈ A} max_{b ∈ B} Q(a, b).
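
A toy illustration of the minimax definition, assuming finite sets A and B and a tabulated quality function Q; the algorithm names and cost values are invented for the example, while the chapter itself treats the general case:

```python
def minimax_algorithm(Q):
    # Q: dict mapping algorithm name -> dict mapping problem name -> cost.
    # The minimax-optimal a* minimizes the worst-case cost over problems.
    return min(Q, key=lambda a: max(Q[a].values()))

Q = {
    "gradient":      {"smooth": 1.0, "multimodal": 9.0},
    "random_search": {"smooth": 4.0, "multimodal": 5.0},
}
print(minimax_algorithm(Q))  # -> "random_search": best worst-case cost (5.0)
```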

1 citation