Open Access · Proceedings Article
Non-parametric Approximate Dynamic Programming via the Kernel Method
Nikhil Bhat, Vivek F. Farias, Ciamac C. Moallemi, et al.
Vol. 25, pp. 386–394
TLDR
A novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees and can serve as a viable alternative to state-of-the-art parametric ADP algorithms.
Abstract
This paper presents a novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees. In particular, we establish both theoretically and computationally that our proposal can serve as a viable alternative to state-of-the-art parametric ADP algorithms, freeing the designer from carefully specifying an approximation architecture. We accomplish this by developing a kernel-based mathematical program for ADP. Via a computational study on a controlled queueing network, we show that our procedure is competitive with parametric ADP approaches.
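To give a flavor of what "non-parametric, kernel-based" value-function approximation means in this setting, here is a minimal sketch in Python. It is not the paper's mathematical program; it simply fits sampled Bellman targets with kernel ridge regression, so the value function is a kernel expansion over the sampled states rather than a hand-designed parametric basis. The function names, the RBF kernel choice, and the toy target are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_value_function(states, bellman_targets, gamma=1.0, reg=1e-3):
    """Kernel ridge regression of sampled Bellman targets onto states.

    Returns V(s) = sum_i alpha_i * k(s_i, s): a non-parametric value
    function whose complexity grows with the data, so no approximation
    architecture has to be specified in advance.
    """
    K = rbf_kernel(states, states, gamma)
    alpha = np.linalg.solve(K + reg * np.eye(len(states)), bellman_targets)
    return lambda s: float(rbf_kernel(np.atleast_2d(s), states, gamma) @ alpha)

# Toy usage: recover V(s) = s^2 from noisy samples on a 1-D state space.
rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, size=(50, 1))
targets = S[:, 0] ** 2 + 0.01 * rng.standard_normal(50)
V = fit_kernel_value_function(S, targets, gamma=10.0, reg=1e-4)
print(V(np.array([0.5])))  # should be close to 0.25
```

In an ADP loop the `bellman_targets` would come from sampled one-step lookaheads under the current policy; here they are synthetic to keep the sketch self-contained.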
Citations
Posted Content
Q-learning with Nearest Neighbors
Devavrat Shah, Qiaomin Xie, et al.
TL;DR: In this article, the authors considered a model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, and provided tight finite sample analysis of the convergence rate.
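The nearest-neighbor idea above can be sketched in a few lines: map each continuous state to its closest landmark state and run a standard Q-learning update on the resulting discrete table. This is a simplified illustration under assumed names (`nn_q_update`, the landmark grid), not the paper's exact scheme or its finite-sample analysis.

```python
import numpy as np

def nn_q_update(landmarks, Q, s, a, r, s_next, step=0.1, discount=0.9):
    """One Q-learning update on a continuous 1-D state space: states are
    discretized by mapping them to their nearest landmark state."""
    i = int(np.argmin(np.abs(landmarks - s)))        # nearest neighbor of s
    j = int(np.argmin(np.abs(landmarks - s_next)))   # nearest neighbor of s'
    target = r + discount * Q[j].max()               # bootstrapped target
    Q[i, a] += step * (target - Q[i, a])
    return Q

# Toy usage: 5 landmarks on [0, 1], 2 actions, a single observed transition.
landmarks = np.linspace(0.0, 1.0, 5)
Q = np.zeros((5, 2))
Q = nn_q_update(landmarks, Q, s=0.45, a=1, r=1.0, s_next=0.9)
print(Q[2, 1])  # 0.45 maps to landmark 0.5 (index 2); prints 0.1
```

The convergence-rate question the paper studies is how finely the landmarks must cover the state space, and how many samples are needed, for such a scheme to approximate the true Q-function.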
Journal ArticleDOI
Practical kernel-based reinforcement learning
TL;DR: An algorithm that turns KBRL into a practical reinforcement learning tool; it significantly outperforms other state-of-the-art reinforcement learning algorithms on the tasks studied, and upper bounds are derived for the distance between the value functions computed by KBRL and KBSF using the same data.
Journal ArticleDOI
A comparison of Monte Carlo tree search and rolling horizon optimization for large-scale dynamic resource allocation problems
TL;DR: This paper adapts MCTS and RHO to two problems – one inspired by tactical wildfire management and a classical problem involving the control of queueing networks – and undertakes an extensive computational study comparing the two methods on large-scale instances of both problems, in terms of both the state and the action spaces.
Journal ArticleDOI
Multi-period portfolio selection using kernel-based control policy with dimensionality reduction
Yuichi Takano, Jun-ya Gotoh, et al.
TL;DR: Numerical experiments show that the nonlinear control policy implemented in this paper works not only to reduce the computation time, but also to improve out-of-sample investment performance.
Journal ArticleDOI
Shape Constraints in Economics and Operations Research
TL;DR: This paper briefly reviews an illustrative set of research utilizing shape constraints in the economics and operations research literature and highlights the methodological innovations and applications with a particular emphasis on utility functions, production economics and sequential decision making applications.
References
Book
Making large-scale support vector machine learning practical
TL;DR: This chapter presents algorithmic and computational results developed for SVMlight V2.0, which make large-scale SVM training more practical, and gives guidelines for the application of SVMs to large domains.
Proceedings ArticleDOI
An improved training algorithm for support vector machines
TL;DR: This paper presents a decomposition algorithm that is guaranteed to solve the QP problem and that does not make assumptions on the expected number of support vectors.
Journal Article
Tree-Based Batch Mode Reinforcement Learning
TL;DR: Within this framework, several classical tree-based supervised learning methods and two newly proposed ensemble algorithms, namely extremely and totally randomized trees, are described; the ensemble methods based on regression trees are found to perform well in extracting relevant information about the optimal control policy from sets of four-tuples.
Journal ArticleDOI
The Linear Programming Approach to Approximate Dynamic Programming
TL;DR: In this article, an efficient method based on linear programming is proposed for approximating solutions to large-scale stochastic control problems, though the authors note limits of the approach for large-scale queueing networks.