Journal ArticleDOI

Stochastic Approximation Algorithm with Gradient Averaging and On-Line Stepsize Rules

A. Ruszczyński, +1 more
- 01 Jul 1984 · Vol. 17, Iss. 2, pp. 1023-1027
TLDR
A new practical stochastic approximation algorithm for finding unconstrained minima of smooth functions is described. It uses an auxiliary filter that averages the observed stochastic gradient estimates, thus producing search directions for subsequent iterations.
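The gradient-averaging idea summarized above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the exponential filter coefficient `beta`, the constant stepsize `alpha`, and the noisy-gradient test function are all illustrative assumptions, and the paper's on-line stepsize rule is not reproduced here.

```python
import numpy as np

def averaged_sgd(grad_fn, x0, steps=200, alpha=0.1, beta=0.9, seed=0):
    """Sketch of stochastic approximation with a gradient-averaging filter.

    `alpha` (stepsize) and `beta` (filter coefficient) are illustrative
    constants; the paper instead adapts the stepsize on-line.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = np.zeros_like(x)                 # filtered search direction
    for _ in range(steps):
        g = grad_fn(x, rng)              # noisy gradient estimate
        d = beta * d + (1 - beta) * g    # auxiliary averaging filter
        x = x - alpha * d                # step along the filtered direction
    return x

# usage: noisy gradient of f(x) = ||x||^2 / 2 (a hypothetical test problem)
noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_star = averaged_sgd(noisy_grad, np.ones(3))
```

Averaging trades some responsiveness for noise reduction: the filtered direction `d` has lower variance than a single gradient sample, which permits larger stepsizes in practice.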
About
This article is published in IFAC Proceedings Volumes. The article was published on 1984-07-01 and has received 7 citations to date. It focuses on the topics: Adaptive stepsize and Stochastic approximation.


Citations
Posted Content

Understanding the Role of Momentum in Stochastic Gradient Methods.

TL;DR: The general formulation of QHM is used to give a unified analysis of several popular momentum-based algorithms, covering their asymptotic convergence conditions, stability regions, and the properties of their stationary distributions, yielding sometimes counter-intuitive practical guidelines for setting the learning rate and momentum parameters.
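For context, the quasi-hyperbolic momentum (QHM) update referenced in this summary interpolates between plain SGD (nu = 0) and SGD with a normalized momentum buffer (nu = 1). The sketch below uses illustrative hyperparameter values, not recommendations from the paper.

```python
import numpy as np

def qhm_step(theta, buf, grad, lr=0.1, beta=0.9, nu=0.7):
    """One quasi-hyperbolic momentum (QHM) update.

    buf is an exponential moving average of gradients; the step mixes
    the raw gradient and the buffer with weights (1 - nu) and nu.
    lr, beta, nu here are illustrative defaults.
    """
    buf = beta * buf + (1 - beta) * grad              # momentum buffer
    theta = theta - lr * ((1 - nu) * grad + nu * buf) # mixed update
    return theta, buf

# usage: minimize f(x) = ||x||^2 / 2 with exact gradients (grad = x)
theta, buf = np.ones(3), np.zeros(3)
for _ in range(100):
    theta, buf = qhm_step(theta, buf, grad=theta)
```

Setting nu = 1 recovers SGD with a normalized (exponential-average) momentum buffer, which connects QHM to the gradient-averaging filter of the main article.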
Book ChapterDOI

A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems

TL;DR: A new stochastic subgradient algorithm for solving convex stochastic programming problems is described; convergence with probability one is proved, and numerical examples are reported.
Journal ArticleDOI

On convergence of the stochastic subgradient method with on-line stepsize rules

TL;DR: In this paper, a stochastic subgradient method for solving convex stochastic programming problems is considered, and on-line rules for determining stepsizes are derived from the concept of local regularized improvement functions.
Posted Content

Statistical Adaptive Stochastic Gradient Methods.

TL;DR: A statistical adaptive procedure called SALSA is proposed for automatically scheduling the learning rate (step size) in stochastic gradient methods, based on a new statistical test for detecting stationarity when using a constant step size.
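The general pattern behind such schedulers (run at a constant stepsize until progress stalls, then shrink the stepsize) can be sketched as below. The plateau test here, comparing means of two recent loss windows, is a stand-in assumption; SALSA's actual stationarity test is a formal statistical test described in that paper, and `window`, `drop`, and `tol` are hypothetical parameters.

```python
import numpy as np

def schedule_lr(losses, lr, window=50, drop=0.5, tol=1e-3):
    """Sketch of stationarity-triggered stepsize scheduling.

    Keeps lr constant while the loss still improves; halves it once
    the two most recent windows of losses look equally good.
    """
    if len(losses) < 2 * window:
        return lr                        # not enough history yet
    recent = np.mean(losses[-window:])
    earlier = np.mean(losses[-2 * window:-window])
    if earlier - recent < tol:           # no significant progress
        return lr * drop                 # shrink the stepsize
    return lr

# usage: flat losses trigger a decrease, improving losses do not
lr_flat = schedule_lr([1.0] * 100, lr=0.1)
lr_prog = schedule_lr(list(np.linspace(2.0, 1.0, 100)), lr=0.1)
```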
Journal ArticleDOI

A method of stochastic subgradients with complete feedback stepsize rule for convex stochastic approximation problems

TL;DR: In the algorithm, the stepsize coefficients are controlled on-line on the basis of information gathered in the course of computations according to a new complete feedback rule derived from the concept of regularized improvement function.