Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
TLDR
The paper presents an SA algorithm based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures; theory and numerical experience indicate that the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems.
Abstract:
The problem of finding a root of the multivariate gradient equation that arises in function minimization is considered. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general Kiefer-Wolfowitz type is appropriate for estimating the root. The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures. Theory and numerical experience indicate that the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems.
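As a concrete illustration of the simultaneous perturbation idea described in the abstract, the sketch below perturbs all coordinates at once, so each gradient estimate costs only two function evaluations regardless of dimension (a finite-difference scheme would need 2p). The gain sequences and the coefficients alpha = 0.602 and gamma = 0.101 follow values commonly cited for SPSA; the function names and parameter choices here are illustrative assumptions, not taken from this page.

```python
import numpy as np

def spsa_gradient(f, x, c, rng):
    """One simultaneous-perturbation estimate of the gradient of f at x.

    All p coordinates are perturbed simultaneously by a random
    Rademacher (+/-1) vector, so only two evaluations of f are needed,
    independent of the dimension p.
    """
    delta = rng.choice([-1.0, 1.0], size=x.shape)  # simultaneous perturbation
    y_plus = f(x + c * delta)
    y_minus = f(x - c * delta)
    # element-wise division by delta gives the per-coordinate estimate
    return (y_plus - y_minus) / (2.0 * c * delta)

def spsa_minimize(f, x0, n_iter=500, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Basic SPSA iteration x_{k+1} = x_k - a_k * g_hat(x_k)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1) ** alpha   # decaying step-size gain
        ck = c / (k + 1) ** gamma   # decaying perturbation size
        x = x - ak * spsa_gradient(f, x, ck, rng)
    return x
```

Run on a noise-free quadratic for clarity, the iterates contract toward the minimizer at the origin even though each step uses only two function values.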
Citations
Proceedings ArticleDOI
Stochastic gradient estimation using a single design point
TL;DR: Compared to the only existing single design-point method, the proposed gradient estimator is advantageous in that its variance does not depend on the magnitude of the response surface at the design point of interest and also decreases as the simulation run length increases.
Journal ArticleDOI
Simultaneous perturbation optimization for efficient image restoration
J.L. Maryak, James C. Spall, et al.
TL;DR: This study implements a similar estimation technique based on a new optimization method (simultaneous perturbation stochastic approximation) that suggests that SPSA is a strong candidate for use in image restoration.
Journal ArticleDOI
Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization
TL;DR: In this paper, a distributed randomized gradient-free mirror descent (DRGFMD) method is developed by introducing a randomized gradient-free oracle into the mirror descent scheme, where the non-Euclidean Bregman divergence is used.
Proceedings ArticleDOI
Optimization and scheduling for automotive powertrains
TL;DR: In this article, the authors present an integrated approach intended to simplify the tasks of finding optimal settings for the engine parameters and of implementing the optimal schedules in the engine control unit.
Journal ArticleDOI
Medical image registration using stochastic optimization
Waleed Mohamed, A. Ben Hamza, et al.
TL;DR: An image registration method is proposed that maximizes a Tsallis entropy-based divergence using a modified simultaneous perturbation stochastic approximation algorithm; experiments demonstrate the registration accuracy of the proposed approach in comparison to existing entropic image alignment techniques.
References
Journal ArticleDOI
An Introduction to Probability Theory and Its Applications.
Journal ArticleDOI
Stochastic Estimation of the Maximum of a Regression Function
J. Kiefer, Jacob Wolfowitz
TL;DR: In this article, the authors give a scheme whereby, starting from an arbitrary point $x_1$, one obtains successively $x_2, x_3, \cdots$ such that $x_n$ converges in probability, as $n \rightarrow \infty$, to the point at which the regression function attains its maximum.
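The one-dimensional Kiefer-Wolfowitz recursion summarized above can be sketched as follows; the test function, gain sequences $a_n = a/n$ and $c_n = c/n^{1/3}$, and starting point are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def kiefer_wolfowitz_max(noisy_f, x1, n_iter=500, a=1.0, c=1.0):
    """Kiefer-Wolfowitz recursion for locating the maximizer of a
    regression function from noisy measurements:

        x_{n+1} = x_n + a_n * (f(x_n + c_n) - f(x_n - c_n)) / (2 * c_n)

    Each iteration uses a central finite difference built from two
    noisy evaluations of the function.
    """
    rng = np.random.default_rng(0)
    x = float(x1)
    for n in range(1, n_iter + 1):
        an = a / n                # step-size gain
        cn = c / n ** (1 / 3)     # finite-difference half-width
        grad = (noisy_f(x + cn, rng) - noisy_f(x - cn, rng)) / (2 * cn)
        x += an * grad            # ascend toward the maximum
    return x
```

For example, with noisy measurements of $-(x-2)^2$ the iterates approach the maximizer $x = 2$.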
Journal ArticleDOI
Multidimensional Stochastic Approximation Methods
TL;DR: In this paper, a multidimensional stochastic approximation scheme is presented, and conditions are given for these schemes to converge almost surely to the solutions of $k$ stochastic equations in $k$ unknowns.
Journal ArticleDOI
Accelerated Stochastic Approximation
TL;DR: In this article, the Robbins-Monro procedure and the Kiefer-Wolfowitz procedure are considered, for which the magnitude of the $n$th step depends on the number of changes in sign in $(X_i - X_{i-1})$ for $i = 2, \cdots, n$.
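The sign-change rule can be illustrated for a one-dimensional Robbins-Monro root search, where the gain is decreased only when successive increments change sign, i.e. only when the iterate appears to oscillate around the root. The harmonic gain schedule and the test problem below are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def accelerated_robbins_monro(noisy_g, x1, n_iter=300, a=1.0):
    """Robbins-Monro root search with a sign-change acceleration rule:
    the gain a / (1 + sign_changes) shrinks only when the increment
    X_i - X_{i-1} changes sign relative to the previous increment."""
    rng = np.random.default_rng(0)
    x = float(x1)
    prev_inc = 0.0
    sign_changes = 0
    for _ in range(n_iter):
        inc = -a / (1 + sign_changes) * noisy_g(x, rng)
        if prev_inc * inc < 0:     # change of sign in (X_i - X_{i-1})
            sign_changes += 1      # iterate is oscillating: reduce gain
        x += inc
        prev_inc = inc
    return x
```

Far from the root the increments keep the same sign, so the gain stays large and progress is fast; near the root the increments alternate and the gain decays, stabilizing the iterate.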