Proceedings Article

Non-parametric Approximate Dynamic Programming via the Kernel Method

TL;DR: A novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees and can serve as a viable alternative to state-of-the-art parametric ADP algorithms.
Abstract: This paper presents a novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees. In particular, we establish both theoretically and computationally that our proposal can serve as a viable alternative to state-of-the-art parametric ADP algorithms, freeing the designer from carefully specifying an approximation architecture. We accomplish this by developing a kernel-based mathematical program for ADP. Via a computational study on a controlled queueing network, we show that our procedure is competitive with parametric ADP approaches.


Citations
Posted Content
TL;DR: In this article, the authors consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, and provide a tight finite-sample analysis of the convergence rate.
Abstract: We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path under an arbitrary policy of the system is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q function using a nearest-neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $ L $, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $ \tilde{O}\big(1/\varepsilon^d\big),$ so the sample complexity scales as $\tilde{O}\big(1/\varepsilon^{d+3}\big).$ We also establish a lower bound showing that a dependence of $ \tilde{\Omega}\big(1/\varepsilon^{d+2}\big)$ is necessary.
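A minimal sketch of the nearest-neighbour idea behind NNQL, on a made-up one-dimensional problem (the grid, step dynamics, reward, and learning rate are assumptions for illustration, not the paper's construction): Q-learning updates are applied at the grid point nearest to each observed continuous state, along a single sample path under the purely random policy.

```python
import random

random.seed(0)
GAMMA, ALPHA = 0.9, 0.2
ACTIONS = (-1, +1)

# Nearest-neighbour grid over the continuous state space [0, 1].
centres = [i / 10 for i in range(11)]
Q = {(c, a): 0.0 for c in centres for a in ACTIONS}

def nearest(s):
    return min(centres, key=lambda c: abs(c - s))

def step(s, a):
    """Noisy walk on [0, 1]; reward 1 only near the right edge."""
    s2 = min(1.0, max(0.0, s + 0.1 * a + random.uniform(-0.02, 0.02)))
    return s2, (1.0 if s2 > 0.95 else 0.0)

# Single sample path under the purely random policy.
s = 0.5
for _ in range(20000):
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    c2 = nearest(s2)
    target = r + GAMMA * max(Q[(c2, b)] for b in ACTIONS)
    c = nearest(s)
    Q[(c, a)] += ALPHA * (target - Q[(c, a)])
    s = s2
```

After the run, Q should be largest for moving right near the rewarding edge and smallest for moving left near the far end.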

43 citations

Journal ArticleDOI
TL;DR: An algorithm that turns KBRL into a practical reinforcement learning tool, significantly outperforms other state-of-the-art reinforcement learning algorithms on the tasks studied, and comes with upper bounds on the distance between the value functions computed by KBRL and KBSF from the same data.
Abstract: Kernel-based reinforcement learning (KBRL) stands out among approximate reinforcement learning algorithms for its strong theoretical guarantees. By casting the learning problem as a local kernel approximation, KBRL provides a way of computing a decision policy which converges to a unique solution and is statistically consistent. Unfortunately, the model constructed by KBRL grows with the number of sample transitions, resulting in a computational cost that precludes its application to large-scale or on-line domains. In this paper we introduce an algorithm that turns KBRL into a practical reinforcement learning tool. Kernel-based stochastic factorization (KBSF) builds on a simple idea: when a transition probability matrix is represented as the product of two stochastic matrices, one can swap the factors of the multiplication to obtain another transition matrix, potentially much smaller than the original, which retains some fundamental properties of its precursor. KBSF exploits such an insight to compress the information contained in KBRL's model into an approximator of fixed size. This makes it possible to build an approximation considering both the difficulty of the problem and the associated computational cost. KBSF's computational complexity is linear in the number of sample transitions, which is the best one can do without discarding data. Moreover, the algorithm's simple mechanics allow for a fully incremental implementation that makes the amount of memory used independent of the number of sample transitions. The result is a kernel-based reinforcement learning algorithm that can be applied to large-scale problems in both off-line and on-line regimes. We derive upper bounds for the distance between the value functions computed by KBRL and KBSF using the same data. We also prove that it is possible to control the magnitude of the variables appearing in our bounds, which means that, given enough computational resources, we can make KBSF's value function as close as desired to the value function that would be computed by KBRL using the same set of sample transitions. The potential of our algorithm is demonstrated in an extensive empirical study in which KBSF is applied to difficult tasks based on real-world data. Not only does KBSF solve problems that had never been solved before, but it also significantly outperforms other state-of-the-art reinforcement learning algorithms on the tasks studied.
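The factor-swapping insight at the heart of KBSF can be checked directly on a tiny example. The matrices below are made-up numbers, not the output of KBRL; the point is only that if P = DK with D and K stochastic, then KD is a smaller matrix that is also a valid transition matrix.

```python
def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

def is_stochastic(M, tol=1e-9):
    """Every entry nonnegative and every row summing to 1."""
    return (all(x >= 0 for row in M for x in row)
            and all(abs(sum(row) - 1.0) < tol for row in M))

# Stochastic factorization P = D K with n = 4 sampled states and
# m = 2 representative states (entries are illustrative).
D = [[0.7, 0.3],
     [0.2, 0.8],
     [0.5, 0.5],
     [0.9, 0.1]]            # n x m, rows sum to 1
K = [[0.6, 0.1, 0.2, 0.1],
     [0.1, 0.4, 0.3, 0.2]]  # m x n, rows sum to 1

P = matmul(D, K)            # n x n model that KBRL would manipulate
P_small = matmul(K, D)      # m x m model obtained by swapping the factors
```

Both products are stochastic, but the swapped one has only m rows, which is the compression KBSF exploits.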

37 citations


Cites background from "Non-parametric Approximate Dynamic ..."

  • ...Following a slightly different line of work, Bhat et al. (2012) propose to kernelize the linear programming formulation of dynamic programming....

    [...]

Journal ArticleDOI
TL;DR: This paper adapts MCTS and RHO to two problems – a problem inspired by tactical wildfire management and a classical problem involving the control of queueing networks – and undertakes an extensive computational study comparing the two methods on instances of both problems that are large scale in terms of both the state and the action spaces.

29 citations

Journal ArticleDOI
TL;DR: Numerical experiments show that the nonlinear control policy implemented in this paper not only reduces computation time but also improves out-of-sample investment performance.
Abstract: This paper studies a nonlinear control policy for multi-period investment. The nonlinear strategy we implement is categorized as a kernel method, but solving large-scale instances of the resulting optimization problem directly has been regarded as computationally intractable in the literature. To overcome this difficulty, we employ a dimensionality-reduction technique often used in principal component analysis. Numerical experiments show that our strategy not only reduces computation time but also improves out-of-sample investment performance.
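The abstract does not detail the reduction, so as a generic illustration of the kind of computation a PCA-style dimensionality reduction performs, here is power iteration extracting the leading eigenvector of a kernel Gram matrix; the sample points and bandwidth are made up, and this is not the paper's specific procedure.

```python
import math

def gauss(x, y, h=1.0):
    return math.exp(-((x - y) ** 2) / (2 * h * h))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]                 # illustrative sample points
K = [[gauss(a, b) for b in xs] for a in xs]    # kernel Gram matrix (PSD)
n = len(xs)

# Power iteration converges to the leading eigenvector of K: the single
# direction a rank-1 PCA-style reduction would keep.
v = [1.0] * n
for _ in range(200):
    w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# Rayleigh quotient gives the corresponding (leading) eigenvalue.
lam = sum(v[i] * sum(K[i][j] * v[j] for j in range(n)) for i in range(n))
```

A quick sanity check is that K v is (numerically) a scalar multiple of v, i.e. the pair (lam, v) satisfies the eigenvalue equation.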

21 citations

Journal ArticleDOI
TL;DR: This paper briefly reviews an illustrative set of research utilizing shape constraints in the economics and operations research literature, and highlights the methodological innovations and applications, with a particular emphasis on utility functions, production economics, and sequential decision-making applications.
Abstract: Shape constraints, motivated by either application-specific assumptions or existing theory, can be imposed during model estimation to restrict the feasible region of the parameters. Although such restrictions may not provide any benefits in an asymptotic analysis, they often improve finite sample performance of statistical estimators and the computational efficiency of finding near-optimal control policies. This paper briefly reviews an illustrative set of research utilizing shape constraints in the economics and operations research literature. We highlight the methodological innovations and applications, with a particular emphasis on utility functions, production economics and sequential decision making applications.
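A concrete instance of a shape constraint is monotonicity. The standard pool-adjacent-violators algorithm (PAVA) computes the least-squares nondecreasing fit to a sequence; the sketch below is a textbook implementation offered only to make the notion of a shape-restricted estimator tangible, not a method from the survey itself.

```python
def isotonic(y):
    """Least-squares nondecreasing fit to y via pool-adjacent-violators."""
    level, weight = [], []
    for v in y:
        level.append(float(v))
        weight.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated;
        # a merged block takes the weighted mean of its members.
        while len(level) > 1 and level[-2] > level[-1]:
            w = weight[-2] + weight[-1]
            m = (level[-2] * weight[-2] + level[-1] * weight[-1]) / w
            level.pop(); weight.pop()
            level[-1], weight[-1] = m, w
    fit = []
    for m, w in zip(level, weight):
        fit.extend([m] * w)
    return fit
```

For example, `isotonic([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean, returning `[1.0, 2.5, 2.5, 4.0]`.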

21 citations


Cites methods from "Non-parametric Approximate Dynamic ..."

  • ...…Farias and Van Roy, 2000; Tsitsiklis and Roy, 1996; Tsitsiklis and Van Roy, 1999; Geramifard et al., 2013), approximate linear programming (De Farias and Van Roy, 2003; De Farias and Van Roy, 2004; Desai et al., 2012a), and nonparametric methods are used (Ormoneit and Sen, 2002; Bhat et al., 2012)....

    [...]

References
Journal ArticleDOI
TL;DR: In this article, the authors used weak convergence methods to show that the sequence of optimal costs for the original multiclass queueing network converges to the optimal cost of the workload limit problem.
Abstract: The workload formulation of multiclass queueing networks, due to Harrison and coworkers, has been fundamental to their analysis. Until recently, there was no theory that started with the physical queue and showed that, under heavy traffic conditions, the optimal costs could be approximated by those of an optimization problem based on the "limit" workload equations. Recently, this was done via viscosity-solution methods by Martins, Shreve, and Soner for one important class. For this same class of problems (including the cases not treated there), we use weak convergence methods to show that the sequence of optimal costs for the original network converges to the optimal cost for the workload limit problem. The proof is simpler and allows weaker (and non-Markovian) conditions. It uses current techniques in weak convergence analysis and appears to be the first analysis of such multiclass "workload" problems by weak convergence methods. The general structure of the development seems applicable to the analysis of more complex systems.

33 citations


"Non-parametric Approximate Dynamic ..." refers background in this paper

  • ...This specific network has been studied by de Farias and Van Roy (2003); Chen and Meyn (1998); Kumar and Seidman (1990), for example, and closely related networks have been studied by Harrison and Wein (1989); Kushner and Martins (1996); Martins et al. (1996); Kumar and Muthuraman (2004)....

    [...]

Proceedings Article
07 Aug 2011
TL;DR: This paper starts with smoothness and develops a non-parametric approach to ALP that is consistent with the smoothness assumption and shows that this new approach has some favorable practical and analytical properties in comparison to (R)ALP.
Abstract: The Approximate Linear Programming (ALP) approach to value function approximation for MDPs is a parametric value function approximation method, in that it represents the value function as a linear combination of features which are chosen a priori. Choosing these features can be a difficult challenge in itself. One recent effort, Regularized Approximate Linear Programming (RALP), uses L1 regularization to address this issue by combining a large initial set of features with a regularization penalty that favors a smooth value function with few non-zero weights. Rather than using smoothness as a backhanded way of addressing the feature selection problem, this paper starts with smoothness and develops a non-parametric approach to ALP that is consistent with the smoothness assumption. We show that this new approach has some favorable practical and analytical properties in comparison to (R)ALP.
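In the same spirit of taking smoothness as the primitive (this is an illustration of the general idea, not Pazis and Parr's formulation), a value function can be represented purely through a Lipschitz constraint on sampled values, via the McShane-Whitney extension; the sample points and the constant L below are hypothetical.

```python
def lipschitz_extension(s, samples, L=2.0):
    """Pointwise-largest L-Lipschitz function f with f(x_i) <= v_i at each sample.

    McShane-Whitney extension: if the samples themselves satisfy the
    Lipschitz condition, f interpolates them exactly, and f is guaranteed
    to be L-Lipschitz everywhere -- smoothness is built in by construction.
    """
    return min(v + L * abs(s - x) for x, v in samples)

# Hypothetical sampled values of a value function on [0, 2].
samples = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.5)]
```

No feature basis is chosen a priori: the representation is just the sample set plus the smoothness constant, which is the sense in which such an approach is non-parametric.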

27 citations


"Non-parametric Approximate Dynamic ..." refers background or methods in this paper

  • ...Along these lines, Pazis and Parr (2011) discuss a non-parametric method that explicitly restricts the smoothness of the value function....

    [...]

  • ...Via a computational study on a controlled queueing network, we show that our non-parametric procedure outperforms the state of the art parametric ADP approaches and established heuristics....

    [...]