Proceedings Article

Non-parametric Approximate Dynamic Programming via the Kernel Method

TL;DR: A novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees and can serve as a viable alternative to state-of-the-art parametric ADP algorithms.
Abstract: This paper presents a novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees. In particular, we establish both theoretically and computationally that our proposal can serve as a viable alternative to state-of-the-art parametric ADP algorithms, freeing the designer from carefully specifying an approximation architecture. We accomplish this by developing a kernel-based mathematical program for ADP. Via a computational study on a controlled queueing network, we show that our procedure is competitive with parametric ADP approaches.
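
The abstract describes a kernel-based mathematical program for ADP without giving its details here. As a rough illustration of the general flavor (not the paper's exact regularized formulation), the sketch below builds a sampled, kernelized approximate linear program: the value function is a kernel expansion over sampled states, sampled Bellman inequalities become linear constraints on the expansion coefficients, and the result is solved as an LP. The function names, the single-sample treatment of the expectation, and the box bound are assumptions made for illustration.

```python
# Sketch of a sampled, kernelized approximate linear program (ALP).
# Illustrative only: the paper's actual mathematical program (and its
# regularization and sampling scheme) is not reproduced here.
import numpy as np
from scipy.optimize import linprog

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gram matrix with entries k(X[i], Y[j]) for a Gaussian (RBF) kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def sampled_kernel_alp(states, costs, next_states, alpha=0.9, bandwidth=1.0, box=100.0):
    """
    states:      (n, d) sampled states x_1, ..., x_n
    costs:       (n,) one-stage costs g_{x_j, a_j}, one sampled action per state
    next_states: (n, d) one sampled next state per pair, a crude stand-in for
                 the expectation E_{x,a}[J(X')]
    Returns weights w so that J(x) ~= sum_i w_i * k(x_i, x).
    """
    K = gaussian_kernel(states, states, bandwidth)           # J(x_j) = (K @ w)[j]
    K_next = gaussian_kernel(next_states, states, bandwidth)

    # ALP objective: maximize nu^T J over sampled states, i.e. minimize -nu^T K w.
    nu = np.full(len(states), 1.0 / len(states))             # uniform state-relevance weights
    c = -(K.T @ nu)

    # Sampled Bellman inequalities: J(x_j) - alpha * J(x'_j) <= g_{x_j, a_j}.
    A_ub = K - alpha * K_next
    b_ub = np.asarray(costs, dtype=float)

    # A box bound on w is a crude regularizer that keeps the sampled LP bounded.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(-box, box), method="highs")
    return res.x
```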


Citations
Posted Content
TL;DR: In this article, the authors considered model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, and provided a tight finite-sample analysis of the convergence rate.
Abstract: We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path under an arbitrary policy of the system is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm to learn the optimal Q function using a nearest neighbor regression method. As the main contribution, we provide a tight finite sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $ L $, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $ \tilde{O}\big(1/\varepsilon^d\big),$ so the sample complexity scales as $\tilde{O}\big(1/\varepsilon^{d+3}\big).$ Indeed, we establish a lower bound showing that a dependence of $ \tilde{\Omega}\big(1/\varepsilon^{d+2}\big)$ is necessary.
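
The abstract describes NNQL as Q-learning combined with nearest neighbor regression over a continuous state space. Below is a minimal sketch of that idea under simplifying assumptions (a fixed set of anchor states, a 1/n step size, and a cost-minimization convention); it does not reproduce the paper's covering-time analysis or exact averaging scheme, and the class and method names are illustrative.

```python
# Minimal sketch of nearest-neighbor Q-learning on a continuous state space.
# Anchors, step sizes, and the update rule are illustrative simplifications.
import numpy as np

class NearestNeighborQ:
    def __init__(self, anchors, n_actions, gamma=0.9):
        self.anchors = np.asarray(anchors, dtype=float)    # (m, d) anchor states
        self.q = np.zeros((len(self.anchors), n_actions))  # Q estimates kept at anchors
        self.visits = np.zeros((len(self.anchors), n_actions))
        self.gamma = gamma

    def _nearest(self, state):
        return int(((self.anchors - np.asarray(state)) ** 2).sum(axis=1).argmin())

    def update(self, state, action, cost, next_state):
        i, j = self._nearest(state), self._nearest(next_state)
        self.visits[i, action] += 1
        step = 1.0 / self.visits[i, action]                # simple decaying step size
        target = cost + self.gamma * self.q[j].min()       # cost-minimization convention
        self.q[i, action] += step * (target - self.q[i, action])

    def greedy_action(self, state):
        return int(self.q[self._nearest(state)].argmin())
```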

43 citations

Journal ArticleDOI
TL;DR: An algorithm (KBSF) that turns KBRL into a practical reinforcement learning tool and significantly outperforms other state-of-the-art reinforcement learning algorithms on the tasks studied, together with upper bounds on the distance between the value functions computed by KBRL and KBSF from the same data.
Abstract: Kernel-based reinforcement learning (KBRL) stands out among approximate reinforcement learning algorithms for its strong theoretical guarantees. By casting the learning problem as a local kernel approximation, KBRL provides a way of computing a decision policy which converges to a unique solution and is statistically consistent. Unfortunately, the model constructed by KBRL grows with the number of sample transitions, resulting in a computational cost that precludes its application to large-scale or on-line domains. In this paper we introduce an algorithm that turns KBRL into a practical reinforcement learning tool. Kernel-based stochastic factorization (KBSF) builds on a simple idea: when a transition probability matrix is represented as the product of two stochastic matrices, one can swap the factors of the multiplication to obtain another transition matrix, potentially much smaller than the original, which retains some fundamental properties of its precursor. KBSF exploits such an insight to compress the information contained in KBRL's model into an approximator of fixed size. This makes it possible to build an approximation considering both the difficulty of the problem and the associated computational cost. KBSF's computational complexity is linear in the number of sample transitions, which is the best one can do without discarding data. Moreover, the algorithm's simple mechanics allow for a fully incremental implementation that makes the amount of memory used independent of the number of sample transitions. The result is a kernel-based reinforcement learning algorithm that can be applied to large-scale problems in both off-line and on-line regimes. We derive upper bounds for the distance between the value functions computed by KBRL and KBSF using the same data. We also prove that it is possible to control the magnitude of the variables appearing in our bounds, which means that, given enough computational resources, we can make KBSF's value function as close as desired to the value function that would be computed by KBRL using the same set of sample transitions. The potential of our algorithm is demonstrated in an extensive empirical study in which KBSF is applied to difficult tasks based on real-world data. Not only does KBSF solve problems that had never been solved before, but it also significantly outperforms other state-of-the-art reinforcement learning algorithms on the tasks studied.
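
The central trick described in this abstract is easy to see numerically: if a transition matrix factors into two row-stochastic matrices, swapping the factors gives a much smaller matrix that is still row-stochastic. The snippet below demonstrates this with random factors standing in for the kernel-derived matrices that KBSF actually uses.

```python
# Demo of the stochastic-factorization swap behind KBSF: if P = D @ K with
# D (n x m) and K (m x n) both row-stochastic, then P_bar = K @ D is an
# m x m row-stochastic matrix (potentially much smaller than P).
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 20                          # n sample transitions, m representative states

def row_stochastic(a):
    return a / a.sum(axis=1, keepdims=True)

D = row_stochastic(rng.random((n, m)))   # in KBSF these come from kernel weights
K = row_stochastic(rng.random((m, n)))

P = D @ K        # n x n transition matrix (the large KBRL-style model)
P_bar = K @ D    # m x m "swapped" model used instead

assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(P_bar.sum(axis=1), 1.0)
print(P.shape, P_bar.shape)              # (1000, 1000) (20, 20)
```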

37 citations


Cites background from "Non-parametric Approximate Dynamic ..."

  • ...Following a slightly different line of work, Bhat et al. (2012) propose to kernelize the linear programming formulation of dynamic programming....


Journal ArticleDOI
TL;DR: This paper adapts Monte Carlo tree search (MCTS) and rolling horizon optimization (RHO) to two problems, a problem inspired by tactical wildfire management and a classical problem involving the control of queueing networks, and undertakes an extensive computational study comparing the two methods on instances of both problems that are large-scale in terms of both the state and the action spaces.

29 citations

Journal ArticleDOI
TL;DR: Numerical experiments show that the nonlinear control policy implemented in this paper works not only to reduce the computation time, but also to improve out-of-sample investment performance.
Abstract: This paper studies a nonlinear control policy for multi-period investment. The nonlinear strategy we implement is categorized as a kernel method, but solving large-scale instances of the resulting optimization problem directly has been considered computationally intractable in the literature. In order to overcome this difficulty, we employ a dimensionality reduction technique which is often used in principal component analysis. Numerical experiments show that our strategy works not only to reduce the computation time, but also to improve out-of-sample investment performance.
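
As a hedged sketch of the kind of PCA-style reduction the abstract alludes to, the snippet below truncates the eigendecomposition of a kernel Gram matrix and returns low-dimensional features that approximately reproduce it; the rank, names, and usage are illustrative assumptions, not the paper's specific procedure.

```python
# Sketch: replace an n x n kernel Gram matrix with rank-r features obtained
# from its truncated eigendecomposition (a PCA-style reduction); downstream
# policy optimization can then work in r dimensions instead of n.
import numpy as np

def low_rank_kernel_features(G, r):
    """G: (n, n) symmetric PSD Gram matrix; returns (n, r) features Z with
    Z @ Z.T ~= G (best rank-r approximation)."""
    eigvals, eigvecs = np.linalg.eigh(G)      # eigenvalues in ascending order
    top = np.argsort(eigvals)[::-1][:r]       # keep the r largest
    vals = np.clip(eigvals[top], 0.0, None)   # guard against tiny negatives
    return eigvecs[:, top] * np.sqrt(vals)

# Usage (hypothetical): with an RBF Gram matrix G built from return data,
# Z = low_rank_kernel_features(G, r=10) gives compact features for the
# multi-period policy optimization.
```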

21 citations

Journal ArticleDOI
TL;DR: This paper briefly reviews an illustrative set of research utilizing shape constraints in the economics and operations research literature and highlights the methodological innovations and applications, with a particular emphasis on utility functions, production economics, and sequential decision making.
Abstract: Shape constraints, motivated by either application-specific assumptions or existing theory, can be imposed during model estimation to restrict the feasible region of the parameters. Although such restrictions may not provide any benefits in an asymptotic analysis, they often improve finite sample performance of statistical estimators and the computational efficiency of finding near-optimal control policies. This paper briefly reviews an illustrative set of research utilizing shape constraints in the economics and operations research literature. We highlight the methodological innovations and applications, with a particular emphasis on utility functions, production economics and sequential decision making applications.
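
For readers unfamiliar with shape constraints, one minimal concrete instance (not any specific method surveyed in the paper) is monotone regression, where the fitted function is restricted to be nondecreasing during estimation:

```python
# Minimal example of a shape constraint: fit a regression function restricted
# to be nondecreasing (isotonic regression) to noisy samples of a monotone curve.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sqrt(x) + rng.normal(scale=0.15, size=x.size)   # true function is monotone

iso = IsotonicRegression(increasing=True)
y_hat = iso.fit_transform(x, y)   # monotone fit; the constraint often helps in small samples
```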

21 citations


Cites methods from "Non-parametric Approximate Dynamic ..."

  • ...…Farias and Van Roy, 2000; Tsitsiklis and Roy, 1996; Tsitsiklis and Van Roy, 1999; Geramifard et al., 2013), approximate linear programming (De Farias and Van Roy, 2003; De Farias and Van Roy, 2004; Desai et al., 2012a), and nonparametric methods are used (Ormoneit and Sen, 2002; Bhat et al., 2012)....


References
Journal ArticleDOI
TL;DR: In this paper, a linear superposition of M basis functions is proposed to fit the value function in a Markovian decision process by reducing the problem dimensionality from the number of states down to M.
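
Written out, the linear architecture referred to here, and the approximate linear program built on it, take the following form (a restatement in the notation of the excerpts quoted under this reference, not additional material from the cited article):

```latex
% Linear architecture: approximate the value function by a linear
% superposition of M basis functions \phi_1, \dots, \phi_M,
\tilde{J}(x) \;=\; \sum_{m=1}^{M} z_m\,\phi_m(x), \qquad z \in \mathbb{R}^M,
% written compactly as \tilde{J} = z^\top \Phi.

% The ALP then optimizes over the M weights instead of |S| values:
\max_{z \in \mathbb{R}^M} \; \sum_{x \in S} \nu_x\,\tilde{J}(x)
\quad \text{s.t.} \quad
\tilde{J}(x) \;\le\; g_{x,a} + \alpha\,\mathbb{E}_{x,a}\big[\tilde{J}(X')\big]
\quad \forall\, x \in S,\ a \in A.
```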

385 citations


"Non-parametric Approximate Dynamic ..." refers background in this paper

  • ...This case is known as the approximate linear program (ALP), and was first proposed by Schweitzer and Seidman (1985). de Farias and Van Roy (2003) provided a pioneering analysis that, stated loosely, showed $\|J^* - z^{*\top}\Phi\|_{1,\nu} \le \frac{2}{1-\alpha}\,\inf_z \|J^* - z^\top\Phi\|_\infty$, for an optimal solution $z^*$ to the ALP....


  • ...Consider a discrete time Markov decision process with finite state space S and finite action space A....


  • ...A policy is a map $\mu : S \to A$, so that $J_\mu(x) \triangleq \mathbb{E}_{x,\mu}\!\left[\sum_{t=0}^{\infty} \alpha^t g_{x_t,a_t}\right]$ represents the expected (discounted, infinite horizon) cost-to-go under policy $\mu$ starting at state $x$, with the discount factor $\alpha \in (0, 1)$....


  • ...…$J(x) \le g_{x,a} + \alpha\,\mathbb{E}_{x,a}[J(X')]$, $\forall\, x \in S,\ a \in A$, $J \in \mathbb{R}^S$, for any strictly positive state-relevance weight vector $\nu \in \mathbb{R}^S_+$. Motivated by this, a series of ADP algorithms (Schweitzer and Seidman, 1985; de Farias and Van Roy, 2003; Desai et al., 2011) have been proposed that compute a weight vector z by…...


Journal ArticleDOI
TL;DR: In this article, the authors demonstrate that cyclic material flow and certain distributed scheduling policies can lead to instability in the sense that the required buffer levels are unbounded, even when the set-up times for changing part types are zero.
Abstract: The paper concerns policies for sequencing material through a flexible manufacturing system to meet desired production goals for each part type. The authors demonstrate by examples that cyclic material flow and certain distributed scheduling policies can lead to instability in the sense that the required buffer levels are unbounded. This can be the case even when the set-up times for changing part types are zero. Sufficient conditions are then derived under which a class of distributed policies is stable. Finally, a general supervisory mechanism is presented which will stabilize any scheduling policy (i.e. maintain bounded buffer sizes at all machines) while satisfying the desired production rates.

337 citations


"Non-parametric Approximate Dynamic ..." refers background in this paper

  • ...This specific network has been studied by de Farias and Van Roy (2003); Chen and Meyn (1998); Kumar and Seidman (1990), for example, and closely related networks have been studied by Harrison and Wein (1989); Kushner and Martins (1996); Martins et al. (1996); Kumar and Muthuraman (2004)....


Journal ArticleDOI
TL;DR: The KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs) and can be applied to online learning control by incorporating an initial controller to ensure online performance.
Abstract: In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that address the main difficulties of existing approaches. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and a convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
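
As a rough sketch of least-squares temporal-difference policy evaluation with kernel features, in the spirit of (but much simpler than) KLSTD-Q: the ridge term below crudely stands in for the ALD-based sparsification, and the function names and bandwidth are illustrative assumptions.

```python
# Sketch: LSTD policy evaluation with kernel features. Features are
# phi(x) = [k(x_1, x), ..., k(x_n, x)] over the sampled states; a ridge term
# replaces the ALD-based sparsification used in KLSTD-Q.
import numpy as np

def rbf(X, Y, bw=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def kernel_lstd(states, rewards, next_states, gamma=0.95, bw=1.0, ridge=1e-3):
    """Transitions (states[t], rewards[t], next_states[t]) collected under a
    fixed policy; returns beta with V(x) ~= rbf(x[None, :], states, bw) @ beta."""
    Phi = rbf(states, states, bw)             # phi(x_t) stacked as rows
    Phi_next = rbf(next_states, states, bw)   # phi(x_{t+1}) stacked as rows
    A = Phi.T @ (Phi - gamma * Phi_next) + ridge * np.eye(len(states))
    b = Phi.T @ np.asarray(rewards, dtype=float)
    return np.linalg.solve(A, b)
```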

279 citations


"Non-parametric Approximate Dynamic ..." refers methods in this paper

  • ...By substituting this parametric regression step with a suitable non-parametric regression procedure, Bethke et al. (2008), Engel et al. (2003), and Xu et al. (2007) come up with corresponding non-parametric algorithms....


  • ...Via a computational study on a controlled queueing network, we show that our non-parametric procedure outperforms the state of the art parametric ADP approaches and established heuristics....


Book ChapterDOI
16 Aug 2006
TL;DR: Two kernel-based reinforcement learning algorithms, the ε-KRL and the least squares kernel-based reinforcement learning (LS-KRL), are proposed, and an example shows that the proposed methods can deal effectively with the reinforcement learning problem without having to explore many states.
Abstract: We consider the problem of approximating the cost-to-go functions in reinforcement learning. By mapping the state implicitly into a feature space, we perform a simple algorithm in the feature space, which corresponds to a complex algorithm in the original state space. Two kernel-based reinforcement learning algorithms, the ε-insensitive kernel-based reinforcement learning (ε-KRL) and the least squares kernel-based reinforcement learning (LS-KRL), are proposed. An example shows that the proposed methods can deal effectively with the reinforcement learning problem without having to explore many states.

258 citations


"Non-parametric Approximate Dynamic ..." refers background or methods in this paper

  • ...Similarly, Ernst et al. (2005) replace the local averaging procedure used for regression by Ormoneit and Sen (2002) with non-parametric regression procedures such as the tree-based learning methods....


  • ...One then employs a policy that is greedy with respect to the corresponding approximation J̃ ....


  • ...Another idea has been to use kernel-based local averaging ideas to approximate the solution of an MDP with that of a simpler variation on a sampled state space (e.g., Ormoneit and Sen, 2002; Ormoneit and Glynn, 2002; Barreto et al., 2011)....


  • ...Via a computational study on a controlled queueing network, we show that our non-parametric procedure outperforms the state of the art parametric ADP approaches and established heuristics....


Journal ArticleDOI
TL;DR: Under a mild assumption on network structure, it is proved that a network operating under a maximum pressure policy achieves the maximum throughput predicted by LPs, and a class of networks is identified for which the nonpreemptive, non-processor-splitting version of a maximum pressure policy is still throughput optimal.
Abstract: Complex systems like semiconductor wafer fabrication facilities (fabs), networks of data switches, and large-scale call centers all demand efficient resource allocation. Deterministic models like linear programs (LP) have been used for capacity planning at both the design and expansion stages of such systems. LP-based planning is critical in setting a medium range or long-term goal for many systems, but it does not translate into a day-to-day operational policy that must deal with discreteness of jobs and the randomness of the processing environment. A stochastic processing network, advanced by J. Michael Harrison (2000, 2002, 2003), is a system that takes inputs of materials of various kinds and uses various processing resources to produce outputs of materials of various kinds. Such a network provides a powerful abstraction of a wide range of real-world systems. It provides high-fidelity stochastic models in diverse economic sectors including manufacturing, service, and information technology. We propose a family of maximum pressure service policies for dynamically allocating service capacities in a stochastic processing network. Under a mild assumption on network structure, we prove that a network operating under a maximum pressure policy achieves maximum throughput predicted by LPs. These policies are semilocal in the sense that each server makes its decision based on the buffer content in its serviceable buffers and their immediately downstream buffers. In particular, their implementation does not use arrival rate information, which is difficult to collect in many applications. We also identify a class of networks for which the nonpreemptive, non-processor-splitting version of a maximum pressure policy is still throughput optimal. Applications to queueing networks with alternate routes and networks of data switches are presented.
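
As a minimal sketch of the maximum pressure idea at a single server, assuming each activity drains one buffer into at most one downstream buffer with a known service rate (the paper's policies cover far more general stochastic processing networks):

```python
# Minimal sketch of a maximum-pressure (backpressure) decision at one server:
# among its serviceable buffers, pick the activity with the largest
# rate-weighted difference between its buffer content and the content of the
# buffer immediately downstream. Assumes each activity feeds at most one
# downstream buffer; the paper treats far more general networks.

def max_pressure_action(queue_lengths, serviceable, downstream, rates):
    """
    queue_lengths: dict buffer -> current content
    serviceable:   list of buffers this server can work on
    downstream:    dict buffer -> downstream buffer (or None if jobs exit)
    rates:         dict buffer -> service rate for that activity
    Returns the buffer to serve, or None to idle.
    """
    best, best_pressure = None, 0.0
    for k in serviceable:
        down = downstream.get(k)
        pressure = rates[k] * (queue_lengths[k]
                               - (queue_lengths[down] if down is not None else 0.0))
        if pressure > best_pressure:
            best, best_pressure = k, pressure
    return best

# Example: a server handling buffers 1 and 3, where buffer 1 feeds buffer 2
# and buffer 3 feeds buffer 4; buffer 1 has the larger backpressure here.
action = max_pressure_action(
    queue_lengths={1: 5, 2: 1, 3: 2, 4: 6},
    serviceable=[1, 3],
    downstream={1: 2, 3: 4},
    rates={1: 1.0, 3: 1.0},
)
```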

238 citations


"Non-parametric Approximate Dynamic ..." refers background in this paper

  • ...This policy has been extensively studied and shown to have a number of good properties, for example, being throughput optimal (Dai and Lin, 2005) and offering good performance for critically loaded settings (Stolyar, 2004)....
