Open Access Journal ArticleDOI

The complexity of dynamic programming

TL;DR: Tight lower bounds are provided on the computational complexity of discrete-time, stationary, infinite horizon, discounted stochastic control problems, for the case where the state space is continuous and the problem is to be solved approximately, within a specified accuracy.
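To make the setting concrete, here is a minimal sketch (not from the paper) of the standard way such problems are solved approximately: discretize the continuous state space on a grid and run value iteration on the resulting finite problem, with the grid resolution governing the achievable accuracy. The toy cost and dynamics below are illustrative assumptions.

import numpy as np

# Hedged sketch: value iteration on a uniform grid approximating a
# discounted, infinite-horizon control problem with state in [0, 1].
# The cost and dynamics are toy placeholders, not the instance
# studied in the paper.

gamma = 0.9                    # discount factor
n_states = 101                 # grid resolution governs accuracy
grid = np.linspace(0.0, 1.0, n_states)
actions = np.linspace(-0.1, 0.1, 5)

def reward(x, a):
    return -(x - 0.5) ** 2 - 0.01 * a ** 2      # toy running reward

def next_index(x, a):
    # Deterministic toy dynamics clipped to the grid; a real instance
    # would integrate over a stochastic transition kernel here.
    return int(np.clip(round((x + a) * (n_states - 1)), 0, n_states - 1))

V = np.zeros(n_states)
for _ in range(1000):                            # value-iteration sweeps
    Q = np.array([[reward(x, a) + gamma * V[next_index(x, a)]
                   for a in actions] for x in grid])
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:         # sup-norm stopping test
        V = V_new
        break
    V = V_new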
About
This article was published in the Journal of Complexity on 1989-12-01 and is open access. It has received 73 citations to date. The article focuses on the topics: Dynamic problem & State space.


Citations
Book

Algorithms for Reinforcement Learning

TL;DR: This book focuses on those reinforcement learning algorithms that build on the powerful theory of dynamic programming; it gives a fairly comprehensive catalog of learning problems, describes the core ideas, and discusses their theoretical properties and limitations.
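As a quick illustration of a DP-rooted learning algorithm of the kind such a book treats, here is a minimal, hedged sketch of tabular Q-learning, which replaces the exact Bellman optimality backup with a sampled, incremental update. The gym-style environment interface (reset/step returning state, reward, done) is an assumption made for the example.

import numpy as np

# Minimal sketch of tabular Q-learning: a sample-based relaxation of the
# Bellman optimality backup. `env` is assumed to expose reset() and
# step(a) -> (next_state, reward, done); this interface is illustrative.

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy behaviour policy
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # stochastic approximation of the DP backup
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q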
Journal ArticleDOI

A survey of computational complexity results in systems and control

TL;DR: This paper considers problems related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control.
Journal Article

Finite-Time Bounds for Fitted Value Iteration

TL;DR: A theoretical analysis of the performance of sampling-based fitted value iteration (FVI) to solve infinite state-space, discounted-reward Markovian decision processes (MDPs) under the assumption that a generative model of the environment is available.
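As a rough illustration of the scheme being analyzed, here is a hedged sketch of sampling-based fitted value iteration: empirical Bellman backups are computed at sampled states using a generative model and then fitted with a regressor, and the fitted function is reused in the next iteration. The feature map, sampler, and generative-model interface below are illustrative assumptions, not the construction from the paper.

import numpy as np

# Sketch of sampling-based fitted value iteration (FVI) for a continuous
# state space, assuming access to a generative model:
#   sample_next(x, a) -> (next_state, reward), drawn from the MDP.
# The linear-in-features regressor is an illustrative choice.

def features(x):
    return np.array([1.0, x, x ** 2])            # assumed feature map

def fit_value(states, targets):
    Phi = np.vstack([features(x) for x in states])
    w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
    return lambda x: float(features(x) @ w)

def fitted_value_iteration(sample_next, actions, gamma=0.95,
                           n_states=200, n_samples=20, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    V = lambda x: 0.0
    for _ in range(n_iters):
        xs = rng.uniform(0.0, 1.0, size=n_states)   # sampled base points
        targets = []
        for x in xs:
            backups = []
            for a in actions:
                draws = [sample_next(x, a) for _ in range(n_samples)]
                backups.append(np.mean([r + gamma * V(x2) for x2, r in draws]))
            targets.append(max(backups))             # empirical Bellman backup
        V = fit_value(xs, np.array(targets))          # regression step
    return V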
Journal ArticleDOI

Using randomization to break the curse of dimensionality

TL;DR: In this paper, random versions of the successive approximations and multigrid algorithms are introduced for computing approximate solutions to a class of finite- and infinite-horizon Markovian decision problems (MDPs).
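To illustrate the flavor of the randomized approach, here is a minimal, hedged sketch of a "random Bellman operator": the expectation over successor states is replaced by a weighted average over randomly drawn sample states, so the per-iteration work does not depend on a deterministic grid over the state space. The reward and transition-density functions are placeholders, not the construction from the paper.

import numpy as np

# Sketch of a random Bellman operator: the expectation over next states
# is approximated by an average over n_samples uniformly drawn states,
# weighted by the transition density p(y | x, a). `reward(x, a)` and
# `density(Y, x, a)` (returning p(y | x, a) at each row of Y) are assumed
# toy inputs for illustration.

def random_bellman_iteration(reward, density, actions, dim=4, gamma=0.95,
                             n_samples=1000, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    Y = rng.uniform(0.0, 1.0, size=(n_samples, dim))  # random grid in [0,1]^dim
    V = np.zeros(n_samples)                           # values at the sample points
    for _ in range(n_iters):
        V_new = np.empty(n_samples)
        for i, x in enumerate(Y):
            q_values = []
            for a in actions:
                w = density(Y, x, a)                  # p(y | x, a) at each sample
                w = w / w.sum()                       # self-normalized weights
                q_values.append(reward(x, a) + gamma * float(w @ V))
            V_new[i] = max(q_values)
        V = V_new
    return Y, V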
Book ChapterDOI

Chapter 14 Numerical dynamic programming in economics

TL;DR: This chapter explores numerical methods for solving dynamic programming (DP) problems, focusing on continuous Markov decision processes (MDPs) because these problems arise frequently in economic applications.
References
Book

Problem complexity and method efficiency in optimization

TL;DR: This book develops a theory of the intrinsic complexity of optimization problems and of the efficiency attainable by methods for solving them.
Book

Stochastic optimal control: the discrete time case

TL;DR: This research monograph is the authoritative and comprehensive treatment of the mathematical foundations of stochastic optimal control of discrete-time systems, including the treatment of the intricate measure-theoretic issues.
Journal ArticleDOI

The Complexity of Markov Decision Processes

TL;DR: All three variants of the classical problem of optimal policy computation in Markov decision processes (finite horizon, infinite horizon discounted, and infinite horizon average cost) are shown to be complete for P, and therefore most likely cannot be solved by highly parallel algorithms.
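For context on why these problems sit inside P, here is a hedged sketch (not from the paper) of the standard linear-programming formulation that solves a finite, discounted MDP in polynomial time; the array shapes and the SciPy call are assumptions made for illustration.

import numpy as np
from scipy.optimize import linprog

# Sketch of the LP formulation for a finite discounted MDP:
#   minimize sum_s V(s)
#   subject to V(s) >= r(s, a) + gamma * sum_{s'} P(s'|s, a) V(s')  for all (s, a).
# P has shape (n_actions, n_states, n_states); r has shape (n_states, n_actions).

def solve_discounted_mdp_lp(P, r, gamma=0.9):
    n_actions, n_states, _ = P.shape
    A_ub, b_ub = [], []
    for a in range(n_actions):
        # (gamma * P_a - I) V <= -r_a encodes the Bellman inequalities
        A_ub.append(gamma * P[a] - np.eye(n_states))
        b_ub.append(-r[:, a])
    res = linprog(c=np.ones(n_states),
                  A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=[(None, None)] * n_states, method="highs")
    return res.x   # the optimal value function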
Book

Dynamic Programming: Deterministic and Stochastic Models

TL;DR: This book provides a textbook treatment of dynamic programming for both deterministic and stochastic models.
Book

Information-Based Complexity

TL;DR: This book provides a comprehensive treatment of information-based complexity, the branch of computational complexity that deals with the intrinsic difficulty of the approximate solution of problems for which the information is partial, noisy, and priced.