Open Access · Journal Article · DOI

A survey of multi-objective sequential decision-making

TL;DR: This article surveys algorithms designed for sequential decision-making problems with multiple objectives and proposes a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function, and the type of policies considered.
Abstract
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
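
To make the survey's key terms concrete, the following is a minimal Python sketch (not from the survey itself) of a linear scalarization function and of Pareto-front extraction from a set of candidate policy values; the value vectors and the weight vector are hypothetical.

```python
# Minimal sketch (not from the survey): a linear scalarization function and
# Pareto-front extraction. The candidate value vectors and the weight
# vector below are hypothetical.
import numpy as np

def scalarize(value: np.ndarray, weights: np.ndarray) -> float:
    """Linear scalarization f(V, w) = w . V, one common choice of f."""
    return float(weights @ value)

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """a Pareto-dominates b: at least as good on every objective, strictly
    better on at least one."""
    return bool(np.all(a >= b) and np.any(a > b))

def pareto_front(values: np.ndarray) -> np.ndarray:
    """Keep the value vectors that no other candidate dominates."""
    keep = [i for i, v in enumerate(values)
            if not any(dominates(u, v) for j, u in enumerate(values) if j != i)]
    return values[keep]

# Hypothetical value vectors of four policies on two objectives.
V = np.array([[1.0, 4.0], [2.0, 3.0], [2.0, 2.0], [0.5, 0.5]])
print(pareto_front(V))                         # [[1. 4.] [2. 3.]]
print(scalarize(V[0], np.array([0.3, 0.7])))   # 3.1
```

Under a linear scalarization, only policies whose values lie on the convex hull can be optimal for some weight vector, which is why the survey distinguishes the convex hull from the full Pareto front as solution concepts.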


Citations
Proceedings Article

Multi-objective MDPs with conditional lexicographic reward preferences

TL;DR: Introduces a rich model, the Lexicographic MDP (LMDP), and a corresponding planning algorithm, LVI, which generalize previous work by allowing conditional lexicographic preferences with slack, and analyzes the convergence characteristics of LVI.
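
As a rough illustration of the ordering behind such preferences, here is a hypothetical sketch of lexicographic comparison of value vectors with per-objective slack; the LMDP model and LVI itself (conditioning, value-iteration backups) go beyond this comparison, and the slack values are illustrative.

```python
# Hypothetical sketch of lexicographic preference with per-objective slack,
# the ordering idea summarized above. Slack values are illustrative.
from typing import Sequence

def lex_prefer(a: Sequence[float], b: Sequence[float],
               slack: Sequence[float]) -> bool:
    """True if `a` is strictly preferred to `b`: objective i (in priority
    order) decides only when the gap exceeds slack[i]; within the slack the
    objectives are treated as tied and the next one breaks the tie."""
    for ai, bi, si in zip(a, b, slack):
        if ai > bi + si:
            return True
        if bi > ai + si:
            return False
    return False  # tied on every objective

# Objective 0 (e.g., safety) has priority; slack 0.1 tolerates a small gap
# so that objective 1 (e.g., throughput) can break the tie.
print(lex_prefer([0.95, 9.0], [1.0, 3.0], slack=[0.0, 0.0]))  # False
print(lex_prefer([0.95, 9.0], [1.0, 3.0], slack=[0.1, 0.0]))  # True
```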
Proceedings Article · DOI

Policy gradient approaches for multi-objective sequential decision making

TL;DR: Presents two Multi-Objective Reinforcement Learning (MORL) approaches that, starting from an initial policy, perform gradient-based policy-search procedures aimed at finding a set of non-dominated policies, and compares them to state-of-the-art MORL algorithms on three MORL benchmark problems.
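
One simple way to realize this idea, sketched below under illustrative assumptions (a two-objective bandit, REINFORCE on a linearly scalarized reward, a sweep of weights), is to run one gradient-based search per scalarization weight and collect the resulting candidates; the cited paper's gradient procedures are more sophisticated than this.

```python
# Illustrative sketch: one gradient-based policy search per scalarization
# weight, collecting candidate policies. The bandit, rewards, step size,
# and weight sweep are all assumptions made for this sketch.
import numpy as np

rng = np.random.default_rng(0)
ARM_REWARDS = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.6]])  # E[r] per arm

def reinforce(weights: np.ndarray, steps: int = 2000, lr: float = 0.1):
    """REINFORCE on the scalarized reward w . r with a softmax policy."""
    theta = np.zeros(len(ARM_REWARDS))                 # one logit per arm
    for _ in range(steps):
        p = np.exp(theta - theta.max()); p /= p.sum()  # softmax policy
        a = rng.choice(len(theta), p=p)
        r = ARM_REWARDS[a] + rng.normal(0.0, 0.1, 2)   # noisy vector reward
        grad_logp = -p; grad_logp[a] += 1.0            # grad of log pi(a)
        theta += lr * (weights @ r) * grad_logp
    p = np.exp(theta - theta.max()); p /= p.sum()
    return p @ ARM_REWARDS                             # expected value vector

# One run per scalarization weight; the non-dominated results among these
# form the candidate policy set.
values = [reinforce(np.array([w, 1.0 - w])) for w in (0.1, 0.5, 0.9)]
print(np.round(values, 2))
```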
Proceedings Article · DOI

Model-based multi-objective reinforcement learning

TL;DR: Supplies the agent with two different exploration strategies and compares their effectiveness at estimating accurate models within a reasonable amount of time; results show that the method with the better exploration strategy quickly learns all Pareto-optimal policies for the Deep Sea Treasure problem.
Proceedings Article

Budget allocation using weakly coupled, constrained Markov decision processes

TL;DR: Considers the problem of budget (or other resource) allocation in sequential decision problems involving a large number of concurrently running sub-processes whose only interaction is through their consumption of budget, and introduces budgeted MDPs, an MDP model in which policies and values are a function of the available budget.
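
A minimal sketch of the core idea, values indexed by the remaining budget, on a hypothetical toy problem; the cited model and algorithms are considerably richer.

```python
# Minimal sketch of the budgeted-MDP idea from the summary above: values
# (and hence policies) are indexed by the remaining budget, and the DP
# recursion only considers actions the budget can still pay for. The tiny
# deterministic chain, costs, and rewards are hypothetical.
import numpy as np

N_STATES, HORIZON, MAX_BUDGET = 3, 4, 3
ACTIONS = [(0, 0.0, 0),   # "wait": free, no reward, stay put
           (1, 1.0, 1)]   # "act":  costs 1 budget, reward 1, move forward

# V[t, s, b] = best return from state s at time t with b budget remaining.
V = np.zeros((HORIZON + 1, N_STATES, MAX_BUDGET + 1))
for t in range(HORIZON - 1, -1, -1):
    for s in range(N_STATES):
        for b in range(MAX_BUDGET + 1):
            best = -np.inf
            for cost, reward, off in ACTIONS:
                if cost > b:                     # infeasible under budget b
                    continue
                s2 = min(s + off, N_STATES - 1)
                best = max(best, reward + V[t + 1, s2, b - cost])
            V[t, s, b] = best
print(V[0, 0])   # [0. 1. 2. 3.]: the value is a function of the budget
```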
Proceedings Article · DOI

Multi-objective reinforcement learning-based deep neural networks for cognitive space communications

TL;DR: Proposes a novel hybrid radio-resource-allocation management control algorithm that integrates multi-objective reinforcement learning with deep artificial neural networks; it enables online learning through interaction with the environment and limits poor resource-allocation performance through "virtual environment exploration".
References
Book

Dynamic Programming

TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
Book

Markov Decision Processes: Discrete Stochastic Dynamic Programming

TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
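
For reference, a minimal value-iteration sketch for the finite-state, infinite-horizon discounted model the book treats in depth; the two-state, two-action transition and reward numbers are made up for illustration.

```python
# Minimal value iteration for a finite, discounted MDP. The transition
# probabilities and rewards below are hypothetical.
import numpy as np

gamma = 0.9
P = np.array([[[0.8, 0.2],    # P[a, s, s']: transition probabilities
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
R = np.array([[1.0, 0.0],     # R[a, s]: immediate rewards
              [0.5, 2.0]])

V = np.zeros(2)
while True:
    Q = R + gamma * (P @ V)   # Q[a, s] = R[a, s] + gamma * sum_s' P[a,s,s'] V[s']
    V_new = Q.max(axis=0)     # Bellman optimality backup
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new
print(V, Q.argmax(axis=0))    # optimal values and a greedy (optimal) policy
```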
Book

Introduction to Reinforcement Learning

TL;DR: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning.
Book

Evolutionary algorithms for solving multi-objective problems

TL;DR: Presents a meta-anatomy of the multi-criteria decision-making process, aiming to provide a scaffolding for the future development of multi-criteria decision-making systems.
Proceedings Article

Policy Gradient Methods for Reinforcement Learning with Function Approximation

TL;DR: This paper proves for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.
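
For reference, the policy-gradient theorem established in this paper states that, for a differentiable policy \(\pi(a \mid s; \theta)\), the performance gradient takes the following form, where \(d^{\pi}\) is the (discounted) state distribution under \(\pi\) and \(Q^{\pi}\) the action-value function:

```latex
\nabla_\theta J(\theta) = \sum_{s} d^{\pi}(s) \sum_{a} \nabla_\theta \pi(a \mid s; \theta)\, Q^{\pi}(s, a)
```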