
Efficient Estimations from a Slowly Convergent Robbins-Monro Process

David Ruppert
About
The article was published on 1988-02-01 and is open access. It has received 381 citations to date. The article focuses on the topics: Industrial engineering and operations research & Process (engineering).
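For orientation, the Robbins-Monro process named in the title is a stochastic root-finding iteration, and the estimator studied in this line of work is built from its iterates. Below is a minimal sketch, not the paper's own code: it runs the iteration x_{n+1} = x_n - a_n * g_noisy(x_n) with a slowly decreasing step a_n = a * n^(-gamma), gamma in (1/2, 1), and also reports the average of the iterates. The toy target, the step constants, and the function names are illustrative assumptions.

```python
import numpy as np

def robbins_monro_averaged(noisy_g, x0, n_iters=10_000, a=1.0, gamma=0.6, rng=None):
    """Robbins-Monro iteration x_{n+1} = x_n - a_n * noisy_g(x_n) with a
    slowly decreasing step a_n = a * n**(-gamma), gamma in (1/2, 1).
    Returns both the final iterate and the average of all iterates."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0
    running_sum = 0.0
    for n in range(1, n_iters + 1):
        a_n = a * n ** (-gamma)        # slower decay than the classical 1/n
        x = x - a_n * noisy_g(x, rng)  # stochastic approximation step
        running_sum += x
    return x, running_sum / n_iters    # last iterate vs. averaged iterate

# Toy root-finding problem (an assumption for demonstration):
# g(x) = x - 2 observed with Gaussian noise, so the root sought is x* = 2.
noisy_g = lambda x, rng: (x - 2.0) + rng.normal(scale=1.0)
last, avg = robbins_monro_averaged(noisy_g, x0=0.0)
print(last, avg)  # the averaged iterate is typically much closer to 2
```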



Citations
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
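As a concrete illustration of the update this TL;DR describes, here is a single Adam step in Python: exponential moving averages of the gradient and squared gradient (the lower-order moments), with bias correction. This is a sketch of the well-known form of the update; the defaults (lr 0.001, beta1 0.9, beta2 0.999, eps 1e-8) are the commonly quoted ones, and the toy quadratic in the usage example is an assumption for demonstration.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (first
    moment) and squared gradient (second moment), with bias correction."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)             # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage on a toy objective f(theta) = (theta - 3)^2:
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 1001):
    grad = 2 * (theta - 3.0)  # gradient of (theta - 3)^2
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)  # approaches 3
```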
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the field's intellectual foundations to the most recent developments and applications.
Posted Content

Adam: A Method for Stochastic Optimization

TL;DR: In this article, adaptive estimates of lower-order moments are used for first-order gradient-based optimization of stochastic objective functions.
Book ChapterDOI

Stochastic Gradient Descent Tricks

TL;DR: This chapter provides background material, explains why SGD is a good learning algorithm when the training set is large, and offers useful recommendations.
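To make this concrete, here is a minimal sketch of plain per-example SGD on L2-regularized least squares, using a decaying step size of the form lr0 / (1 + lr0*lam*t), a schedule often recommended in this setting. The function name, data, and constants are illustrative assumptions, not the chapter's code.

```python
import numpy as np

def sgd_least_squares(X, y, lr0=0.1, lam=1e-4, epochs=5, rng=None):
    """Plain SGD for L2-regularized least squares: one example per update,
    with the 1 / (1 + lr0*lam*t) step-size decay."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):    # reshuffle examples each epoch
            lr = lr0 / (1 + lr0 * lam * t)   # decaying step size
            err = X[i] @ w - y[i]
            w -= lr * (err * X[i] + lam * w) # gradient of one example's loss
            t += 1
    return w

# Usage on synthetic data: recover a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)
print(sgd_least_squares(X, y))  # close to [1, -2, 0.5]
```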
Journal ArticleDOI

The statistical evaluation of social network dynamics

TL;DR: A class of statistical models for longitudinal network data is proposed: continuous-time Markov chain models that can be implemented as simulation models, together with statistical procedures based on the method of moments.
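The moment-based estimation mentioned here can be sketched generically as simulation-based moment matching: choose parameters theta so that the expected simulated statistic matches the observed one, updated by a stochastic-approximation step. This is a schematic sketch, not the paper's actual estimator; it assumes a statistic that increases in theta, and the function names and toy example are illustrative.

```python
import numpy as np

def method_of_moments_rm(simulate_stat, s_obs, theta0, n_iters=200, a=0.5):
    """Solve the moment equation E_theta[s(Y)] = s_obs by a stochastic
    approximation: draw one simulated statistic per step and move theta
    against the gap (assumes the statistic is increasing in theta)."""
    theta = np.asarray(theta0, dtype=float)
    for t in range(1, n_iters + 1):
        s_sim = simulate_stat(theta)                # one simulated statistic
        theta = theta - (a / t) * (s_sim - s_obs)   # Robbins-Monro-type step
    return theta

# Toy example: the simulated statistic is theta plus noise, observed value 1,
# so the moment equation is solved at theta = 1.
rng = np.random.default_rng(0)
sim = lambda th: th + rng.normal(scale=0.5)
print(method_of_moments_rm(sim, s_obs=1.0, theta0=5.0))  # close to 1
```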