Open Access · Journal Article

Multivariate stochastic approximation using a simultaneous perturbation gradient approximation

James C. Spall
- 01 Mar 1992 -
- IEEE Transactions on Automatic Control, Vol. 37, Iss. 3, pp. 332-341
TLDR
The paper presents an SA algorithm based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures; theory and numerical experience indicate that it can be significantly more efficient than the standard algorithms in large-dimensional problems.
Abstract
The problem of finding a root of the multivariate gradient equation that arises in function minimization is considered. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general Kiefer-Wolfowitz type is appropriate for estimating the root. The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures. Theory and numerical experience indicate that the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems.
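
Since the abstract's efficiency claim rests on how the gradient is estimated, a small illustration may help. The sketch below, in Python, implements a two-sided simultaneous perturbation gradient estimate on a noisy quadratic: all coordinates are perturbed at once by a random ±1 vector, so each iteration needs only two loss measurements regardless of dimension, versus the 2p measurements of a standard finite-difference (Kiefer-Wolfowitz) scheme. The gain constants and the test function are illustrative choices, not values taken from the paper.

```python
import numpy as np

def spsa_minimize(loss, theta0, n_iter=1000, a=0.5, c=0.1,
                  alpha=0.602, gamma=0.101, A=100, seed=0):
    """Minimize a noisy loss via simultaneous perturbation (SPSA).

    Each iteration forms a full gradient estimate from only two
    noisy loss evaluations, whatever the dimension of theta.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1 + A) ** alpha   # decaying step-size sequence
        ck = c / (k + 1) ** gamma       # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher vector
        # Two-sided simultaneous perturbation gradient estimate:
        # one scalar difference, divided elementwise by each perturbation.
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Illustrative noisy quadratic; the true minimizer is the zero vector.
rng_obs = np.random.default_rng(1)
noisy_quadratic = lambda x: float(np.dot(x, x)) + rng_obs.normal(scale=0.01)

print(spsa_minimize(noisy_quadratic, np.ones(10)))
```

The gain exponents 0.602 and 0.101 follow commonly cited practical guidelines for SPSA; any gain sequences satisfying the usual stochastic approximation conditions would serve for the asymptotics.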

Citations

Simulation Optimization: A Concise Overview and Implementation Guide

TL;DR: This tutorial provides a concise guide to the state of the art in solving a few key flavors of SO, along with pointers to stable algorithmic implementations and good entry points into the literature.
Proceedings Article

Optimization over discrete sets via SPSA

TL;DR: A fixed gain version of the SPSA (simultaneous perturbation stochastic approximation) method for function minimization is developed and the error process is characterized.
Proceedings Article

Training Supervised Speech Separation System to Improve STOI and PESQ Directly

TL;DR: The calculated gradients are used in a gradient descent algorithm to optimize STOI and PESQ directly, and experimental results show that the proposed method improves speech separation performance.
Journal Article

Global Stochastic Optimization with Low-Dispersion Point Sets

TL;DR: This study concerns a generic model-free stochastic optimization problem requiring the minimization of a risk function defined on a given bounded domain in a Euclidean space, and establishes large-deviation-type bounds on the estimate of the minimizer in terms of sample size.
Journal Article

Adaptive Newton-based multivariate smoothed functional algorithms for simulation optimization

TL;DR: This article presents three smoothed functional (SF) algorithms for simulation optimization; they are adaptive Newton-based stochastic approximation algorithms that estimate both the gradient and the Hessian, and two unbiased SF-based estimators for the Hessian are derived.
References
Journal Article

Stochastic Estimation of the Maximum of a Regression Function

TL;DR: In this article, the authors give a scheme whereby, starting from an arbitrary point $x_1$, one obtains successively $x_2, x_3, \cdots$ such that $x_n$ converges in probability, as $n \rightarrow \infty$, to the point at which the regression function attains its maximum.
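
For context, here is a minimal sketch of the one-dimensional recursion this TL;DR describes (the Kiefer-Wolfowitz scheme), assuming the standard gain choices $a_n = a/n$ and $c_n = c\,n^{-1/3}$; the test function and constants are illustrative, not taken from the reference.

```python
import numpy as np

def kiefer_wolfowitz_max(y, x1, n_iter=2000, a=1.0, c=1.0):
    """One-dimensional Kiefer-Wolfowitz recursion for maximizing a
    regression function M(x) observed only through noisy values y(x):
        x_{n+1} = x_n + a_n * (y(x_n + c_n) - y(x_n - c_n)) / (2 * c_n).
    """
    x = float(x1)
    for n in range(1, n_iter + 1):
        an = a / n               # step-size sequence
        cn = c / n ** (1 / 3)    # finite-difference half-width
        x += an * (y(x + cn) - y(x - cn)) / (2 * cn)
    return x

rng = np.random.default_rng(0)
# Noisy observations of M(x) = -(x - 2)^2, maximized at x = 2.
print(kiefer_wolfowitz_max(lambda x: -(x - 2) ** 2 + rng.normal(scale=0.1), x1=0.0))
```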
Journal Article

Multidimensional Stochastic Approximation Methods

TL;DR: In this paper, multidimensional stochastic approximation schemes are presented, and conditions are given for these schemes to converge almost surely (a.s.) to the solution of $k$ stochastic equations in $k$ unknowns.
Journal Article

Accelerated Stochastic Approximation

TL;DR: In this article, accelerated versions of the Robbins-Monro and Kiefer-Wolfowitz procedures are considered, for which the magnitude of the $n$th step depends on the number of changes in sign of $(X_i - X_{i-1})$ for $i = 2, \cdots, n$.
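
A minimal sketch of this sign-change rule, applied here to a scalar Robbins-Monro root-finding iteration; the test problem, constants, and function names are illustrative assumptions rather than details from the reference.

```python
import numpy as np

def kesten_robbins_monro(y_noisy, x1, n_iter=2000, a=1.0):
    """Scalar Robbins-Monro iteration with Kesten-style acceleration:
    the step size a/s is reduced only when successive increments
    (X_i - X_{i-1}) change sign, i.e. once the iterate starts to
    oscillate around the root and noise begins to dominate."""
    x = float(x1)
    s = 1            # 1 + number of sign changes observed so far
    last_diff = 0.0
    for _ in range(n_iter):
        x_new = x - (a / s) * y_noisy(x)
        diff = x_new - x
        if last_diff * diff < 0:  # increments changed sign
            s += 1
        last_diff = diff
        x = x_new
    return x

rng = np.random.default_rng(0)
# Noisy observations of M(x) = x - 1; the root is x = 1.
print(kesten_robbins_monro(lambda x: (x - 1) + rng.normal(scale=0.1), x1=5.0))
```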