Journal Article

Asymptotic global behavior for stochastic approximation and diffusions with slowly decreasing noise effects: Global minimization via Monte Carlo

Harold J. Kushner
01 Mar 1987 - Vol. 47, Iss. 1, pp. 169-185
TLDR
In this article, the authors study the asymptotic behavior of stochastic systems whose objective function values can only be sampled via Monte Carlo; the resulting discrete algorithm is a combination of stochastic approximation and simulated annealing.
Abstract
The asymptotic behavior of the systems $X_{n + 1} = X_n + a_n b( {X_n ,\xi _n } ) + a_n \sigma ( X_n )\psi _n $ and $dy = \bar b( y )dt + \sqrt {a( t )} \sigma ( y )dw$ is studied, where $\{ {\psi _n } \}$ is i.i.d. Gaussian, $\{ \xi _n \}$ is a (correlated) bounded sequence of random variables, and $a_n \approx A_0 /\log ( A_1 + n )$. Without $\{ \xi _n \}$, such algorithms are versions of the “simulated annealing” method for global optimization. When the objective function values can only be sampled via Monte Carlo, the discrete algorithm is a combination of stochastic approximation and simulated annealing, and our forms are appropriate for that case. The $\{ \psi _n \}$ are the “annealing” variables, and $\{ \xi _n \}$ is the sampling noise. For large $A_0 $, a full asymptotic analysis is presented via the theory of large deviations: mean escape time (after arbitrary time $n$) from neighborhoods of stable sets of the algorithm, mean transition times (after arbitrary time $n$) from a neighborhood of one stable set to another, a...
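To make the recursion concrete, here is a minimal numerical sketch of the discrete algorithm above. The double-well objective, the noise scales, and all constants are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of X_{n+1} = X_n + a_n * b(X_n, xi_n) + a_n * sigma(X_n) * psi_n
# with gain a_n = A0 / log(A1 + n).  Here U(x) = x^4/4 - x^2/2 (minima at
# +-1), b is a Monte Carlo estimate of -U'(x) with sampling noise xi_n,
# and psi_n is the i.i.d. Gaussian "annealing" variable.

rng = np.random.default_rng(0)

def b(x, xi):
    return -(x**3 - x) + xi          # noisy estimate of -U'(x)

def sigma(x):
    return 1.0                        # constant noise coefficient, for simplicity

A0, A1 = 0.5, 10.0
x = 1.5                               # start away from both minima
for n in range(1, 200_001):
    a_n = A0 / np.log(A1 + n)
    xi_n = 0.3 * rng.standard_normal()    # bounded-variance sampling noise
    psi_n = rng.standard_normal()         # annealing variable
    x += a_n * b(x, xi_n) + a_n * sigma(x) * psi_n
    x = float(np.clip(x, -3.0, 3.0))      # projection onto a bounded set keeps
                                          # the cubic drift numerically stable

print(f"final iterate: {x:.3f}")          # should settle near +1 or -1
```

The projection step is a standard stochastic-approximation safeguard, not part of the recursion quoted in the abstract.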


Citations
Book Chapter

Simulated Annealing Algorithms for Continuous Global Optimization

TL;DR: The theoretical issue of convergence in probability of the sequences of iterates and candidate points to the set of global optima is explored, and the Langevin equation and its discretized version are discussed.
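As a rough illustration of the discretized Langevin equation with logarithmic cooling (the step size, cooling constant, and test function below are assumptions, not from the chapter):

```python
import numpy as np

# Discretized Langevin recursion with logarithmic cooling:
#   x_{k+1} = x_k - h * grad_U(x_k) + sqrt(2 * h * T_k) * N(0, I),
#   T_k = c / log(k + 2).

rng = np.random.default_rng(1)

def grad_U(x):
    # U(x) = (x1^2 - 1)^2 + x2^2, global minima at (+1, 0) and (-1, 0)
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

h, c = 1e-2, 1.0
x = np.array([2.0, 2.0])
for k in range(100_000):
    T_k = c / np.log(k + 2.0)
    x = x - h * grad_U(x) + np.sqrt(2.0 * h * T_k) * rng.standard_normal(2)

print(np.round(x, 2))    # expect a point near one of the two minima
```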
Journal Article

Metropolis-type annealing algorithms for global optimization in R^d

TL;DR: In this article, the convergence of a class of Metropolis-type Markov chain annealing algorithms for global optimization of a smooth function is established; no prior information is assumed as to which bounded region contains a global minimum.
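A minimal sketch of a Metropolis-type annealing chain on R^d with a logarithmic temperature schedule; the proposal scale, schedule constants, and test function are illustrative assumptions:

```python
import numpy as np

# Metropolis-type annealing: Gaussian random-walk proposal, accept/reject
# at a slowly decreasing temperature T_k = c / log(k + 2).

rng = np.random.default_rng(2)

def U(x):
    return float(np.sum((x ** 2 - 1.0) ** 2))   # minima at the points {+-1}^d

d, scale = 3, 0.5
x = rng.normal(size=d)
ux = U(x)
for k in range(200_000):
    T_k = 2.0 / np.log(k + 2.0)
    y = x + scale * rng.standard_normal(d)       # random-walk proposal
    uy = U(y)
    if np.log(rng.random()) < (ux - uy) / T_k:   # Metropolis acceptance test
        x, ux = y, uy

print(np.round(x, 2))    # expect every coordinate near +1 or -1
```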
Journal Article

Learning processes in neural networks

TL;DR: The ensemble description allows the asymptotic behavior of the plasticities to be studied for a large class of neural networks, and an expression is derived for the size of the fluctuations in an unchanging environment for small learning parameters.
Journal Article

Simulated annealing simulated

TL;DR: In this article, the performance of simulated annealing methods for finding a global minimum point of a function is studied.
Journal Article

Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing

TL;DR: In this paper, weak convergence rates for stochastic approximation are studied, with application to multiple targets and to simulated annealing algorithms whose weak convergence to a distribution concentrated on the potential's minima had been established by Gelfand and Mitter or by Hwang and Sheu.
References
Journal Article

Optimization by Simulated Annealing

TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Journal Article

Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images

TL;DR: An analogy between images and statistical mechanics systems is made; the analogue of annealing, carried out under the posterior distribution, yields the maximum a posteriori (MAP) estimate of the image given the degraded observations, creating a highly parallel “relaxation” algorithm for MAP estimation.
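A toy version of stochastic relaxation with annealing, assuming a hypothetical Ising-type prior and binary flip noise (none of the constants or the model come from the paper):

```python
import numpy as np

# Annealed Gibbs sweeps for MAP restoration of a binary image x in {-1,+1}
# under the posterior energy E(x) = -beta*sum_{i~j} x_i x_j - lam*sum_i x_i y_i.

rng = np.random.default_rng(3)

n, beta, lam = 32, 0.7, 1.0
truth = np.ones((n, n)); truth[:, : n // 2] = -1          # two-region image
y = truth * np.where(rng.random((n, n)) < 0.85, 1, -1)    # 15% of pixels flipped
x = y.copy()

for sweep in range(50):
    T = 3.0 / np.log(sweep + 2.0)                         # annealing schedule
    for i in range(n):
        for j in range(n):
            nb = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
                  + x[i, (j - 1) % n] + x[i, (j + 1) % n])
            dE = 2.0 * (beta * nb + lam * y[i, j])        # E(x_ij=-1) - E(x_ij=+1)
            p_plus = 1.0 / (1.0 + np.exp(-dE / T))        # local conditional P(+1)
            x[i, j] = 1 if rng.random() < p_plus else -1

print("pixel agreement with truth:", (x == truth).mean())
```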
Book

Large Deviations and Applications

TL;DR: The large deviation problem for empirical distributions of Markov processes is studied and applied to the Wiener sausage problem.
Journal Article

Diffusions for global optimization

TL;DR: In this article, the authors seek a global minimum of $U:[0, 1]^n \to R$ by means of a diffusion whose noise coefficient decreases slowly in time, a continuous-time analogue of simulated annealing.
Journal Article

The averaging principle and theorems on large deviations

TL;DR: In this article, the averaging principle for stochastic differential equations is used to describe the behavior of the system over large time intervals, and the probability of large deviations from the averaged system is analyzed.