Open Access Proceedings Article

Optimal Asset Allocation using Adaptive Dynamic Programming

Ralph Neuneier
Vol. 8, pp. 952–958
TL;DR
Asset allocation is formalized as a Markovian Decision Problem that can be optimized with dynamic programming or reinforcement learning based algorithms; on an artificial exchange rate, the allocation strategy learned by reinforcement learning is shown to be equivalent to the policy computed by dynamic programming.
Abstract
In recent years, the interest of investors has shifted to computerized asset allocation (portfolio management) to exploit the growing dynamics of the capital markets. In this paper, asset allocation is formalized as a Markovian Decision Problem that can be optimized by applying dynamic programming or reinforcement learning based algorithms. Using an artificial exchange rate, the asset allocation strategy optimized with reinforcement learning (Q-Learning) is shown to be equivalent to a policy computed by dynamic programming. The approach is then tested on the task of investing liquid capital in the German stock market; here, neural networks are used as value function approximators. The resulting asset allocation strategy is superior to a heuristic benchmark policy. This is a further example demonstrating the applicability of neural network based reinforcement learning to problem settings with a high-dimensional state space.
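The equivalence claimed in the abstract can be illustrated on a toy problem. The sketch below is not the paper's setup (its artificial exchange rate, state encoding, and reward are not reproduced here); it is a minimal, assumed two-regime market MDP on which tabular Q-Learning recovers the same policy that dynamic programming would compute: stay in cash in a falling regime, invest in a rising one.

```python
import random

# Toy asset-allocation MDP (illustrative only, not the paper's setup):
#   state  = market regime: 0 = falling, 1 = rising
#   action = 0: hold cash (reward 0), 1: invest (reward +1 in a rising
#            regime, -1 in a falling one)
# The regime switches with probability 0.3, independently of the action,
# so the DP-optimal policy is simply "invest iff the regime is rising".

P_SWITCH, GAMMA, ALPHA, EPSILON = 0.3, 0.9, 0.1, 0.1

def step(state, action, rng):
    """One market transition: immediate reward and next regime."""
    reward = 0.0 if action == 0 else (1.0 if state == 1 else -1.0)
    next_state = 1 - state if rng.random() < P_SWITCH else state
    return reward, next_state

def q_learning(episodes=5000, horizon=50, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]          # q[state][action]
    for _ in range(episodes):
        s = rng.randint(0, 1)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                a = rng.randint(0, 1)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            r, s2 = step(s, a, rng)
            # standard Q-Learning update toward the bootstrapped target
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in (0, 1)]
print(policy)  # -> [0, 1]: cash when falling, invest when rising
```

Because the regime transitions do not depend on the action, the continuation value is the same for both actions in each state, so the learned Q-values differ exactly by the immediate expected reward and greedy extraction yields the dynamic-programming policy.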


Citations
Journal ArticleDOI

Learning to trade via direct reinforcement

TL;DR: It is demonstrated how direct reinforcement can be used to optimize risk-adjusted investment returns (including the differential Sharpe ratio), while accounting for the effects of transaction costs.
Journal ArticleDOI

Performance functions and reinforcement learning for trading systems and portfolios

TL;DR: This paper proposes to train trading systems and portfolios by optimizing objective functions that directly measure trading and investment performance, such as profit or wealth, economic utility, the Sharpe ratio, and the differential Sharpe ratio.
Posted Content

Practical Deep Reinforcement Learning Approach for Stock Trading

TL;DR: The proposed deep reinforcement learning approach is shown to outperform the two baselines in terms of both the Sharpe ratio and cumulative returns.
Book ChapterDOI

How to Train Neural Networks

TL;DR: The Observer–Observation Dilemma is solved by forcing the network to construct smooth approximation functions, and some pruning algorithms to optimize the network architecture are proposed.
Journal ArticleDOI

Deep Reinforcement Learning for Automated Stock Trading: An Ensemble Strategy

TL;DR: An ensemble strategy that employs deep reinforcement schemes to learn a stock trading strategy by maximizing investment return is proposed and shown to outperform the three individual algorithms and two baselines in terms of the risk-adjusted return measured by the Sharpe ratio.
References
Book

Dynamic Programming

TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.

Learning to Solve Markovian Decision Processes

TL;DR: This dissertation establishes a novel connection between stochastic approximation theory and RL that provides a uniform framework for understanding all the different RL algorithms that have been proposed to date and highlights a dimension that clearly separates all RL research from prior work on DP.