Journal ArticleDOI

Likelihood ratio gradient estimation for stochastic systems

Peter W. Glynn
- 01 Oct 1990 - 
- Vol. 33, Iss: 10, pp 75-84
TLDR
This article describes two important problems that motivate the study of efficient gradient estimation algorithms, presents the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent, and derives likelihood ratio gradient estimators for both time-homogeneous and non-time-homogeneous discrete-time Markov chains.
Abstract
Consider a computer system having a CPU that feeds jobs to two input/output (I/O) devices having different speeds. Let t be the fraction of jobs routed to the first I/O device, so that 1 - t is the fraction routed to the second. Suppose that a = a(t) is the steady-state amount of time that a job spends in the system. Given that t is a decision variable, a designer might wish to minimize a(t) over t. Since a(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community. It is our goal, in this article, to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been previously described (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate. In this article, we first describe two important problems that motivate our study of efficient gradient estimation algorithms. Next, we present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows then specializes the estimator to discrete-time stochastic processes. We derive likelihood ratio gradient estimators for both time-homogeneous and non-time-homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time.
As examples of our analysis, we present the gradient estimators for time-homogeneous continuous-time Markov chains; non-time homogeneous continuous-time Markov chains; semi-Markov processes; and generalized semi-Markov processes. (The analysis throughout these sections assumes the performance measure that defines a(t) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.
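The core idea of the likelihood ratio (score function) method described in the abstract can be illustrated with a minimal sketch. The example below is an assumption of this summary, not taken from the article: it estimates d/dθ E[h(X)] for X drawn from an Exponential(θ) density with h(x) = x, using the identity that the gradient equals E[h(X) · d/dθ log p(X; θ)], so no differentiation of the simulation output itself is required.

```python
import random

def lr_gradient_estimate(theta, n=200_000, seed=0):
    """Likelihood ratio (score function) estimate of d/dtheta E[h(X)]
    for X ~ Exponential(rate=theta) and h(x) = x.

    The density is p(x; theta) = theta * exp(-theta * x), so the score is
    d/dtheta log p(x; theta) = 1/theta - x.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(theta)   # sample from p(.; theta)
        score = 1.0 / theta - x      # d/dtheta log p(x; theta)
        total += x * score           # h(x) times the score
    return total / n

theta = 2.0
estimate = lr_gradient_estimate(theta)
exact = -1.0 / theta**2              # since E[X] = 1/theta
print(estimate, exact)
```

Because E[X] = 1/θ, the exact gradient is -1/θ² = -0.25 at θ = 2, and the Monte Carlo estimate converges to it; the same averaging pattern extends to the Markov-chain settings treated in the article, where the score of a sample path is a sum of per-transition scores.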


Citations
Proceedings Article

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

TL;DR: SeqGAN models the data generator as a stochastic policy in reinforcement learning (RL); the RL reward signal comes from the discriminator judged on a complete sequence and is passed back to the intermediate state-action steps using Monte Carlo search.
Journal ArticleDOI

A brief survey of deep reinforcement learning

TL;DR: This survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic, and highlight the unique advantages of deep neural networks, focusing on visual understanding via RL.
Posted Content

Categorical Reparameterization with Gumbel-Softmax

TL;DR: It is shown that the Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
Proceedings Article

The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables

TL;DR: The Concrete distribution is a new family of distributions with closed-form densities and a simple reparameterization, which enables optimizing large-scale stochastic computation graphs via gradient descent.
Book

Algorithms for Reinforcement Learning

TL;DR: This book focuses on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming, and gives a fairly comprehensive catalog of learning problems, and describes the core ideas, followed by the discussion of their theoretical properties and limitations.
References
Book

Monte Carlo methods

TL;DR: This book surveys the general nature of Monte Carlo methods and gives a short résumé of statistical terms, including random, pseudorandom, and quasirandom numbers.
Journal ArticleDOI

Importance sampling for stochastic simulations

TL;DR: Applications are given to a GI/G/1 queueing problem and to response surface estimation; computation of the theoretical moments arising in importance sampling is discussed, and some numerical examples are given.
Journal ArticleDOI

Sensitivity Analysis for Simulations via Likelihood Ratios

TL;DR: In this article, a simple method is presented for estimating the sensitivity of quantities obtained from simulation with respect to a class of parameters, where sensitivity is defined as the derivative of an expectation with respect to a parameter.