Open Access · Journal Article

A survey of multi-objective sequential decision-making

TL;DR
This article surveys algorithms designed for sequential decision-making problems with multiple objectives and proposes a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function, and the type of policies considered.
Abstract
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
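The taxonomy rests on two notions that are easy to make concrete: a scalarization function that projects a vector of per-objective values onto a single scalar (linear weighting being the simplest case), and the Pareto front of policies whose value vectors are not dominated by any other policy. The sketch below is a minimal illustration under assumed toy data, not code from the survey; the policy values, weights, and helper names are hypothetical.

```python
"""Illustrative sketch (not from the survey): linear scalarization of
vector-valued returns and extraction of the Pareto front from a set of
candidate policies, each summarized by its expected return per objective."""
import numpy as np

# Hypothetical expected returns for five candidate policies on two objectives
# (e.g. task reward vs. energy efficiency); rows = policies, columns = objectives.
values = np.array([
    [1.0, 0.2],
    [0.8, 0.8],
    [0.3, 1.0],
    [0.5, 0.5],   # dominated by [0.8, 0.8]
    [0.9, 0.1],   # dominated by [1.0, 0.2]
])

def linear_scalarization(values: np.ndarray, weights: np.ndarray) -> int:
    """Project each multi-objective value to a scalar and pick the best policy."""
    return int(np.argmax(values @ weights))

def pareto_front(values: np.ndarray) -> list[int]:
    """Return indices of policies not Pareto-dominated by any other policy."""
    front = []
    for i, v in enumerate(values):
        dominated = any(
            np.all(u >= v) and np.any(u > v)
            for j, u in enumerate(values) if j != i
        )
        if not dominated:
            front.append(i)
    return front

print(linear_scalarization(values, np.array([0.5, 0.5])))  # -> 1
print(pareto_front(values))                                # -> [0, 1, 2]
```

With a known linear scalarization, a single policy (here index 1) suffices; when the weights are unknown or the scalarization may be nonlinear, the solution concept grows to a convex hull or the full Pareto front, which is the distinction the survey's taxonomy formalizes.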


Citations
Journal Article

A practical guide to multi-objective reinforcement learning and planning

TL;DR: This article presents a guide to applying multi-objective decision-making methods to difficult problems, aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods and who wish to adopt a multi-objective perspective on their research.
Proceedings Article

Offline Contextual Bandits with High Probability Fairness Guarantees

TL;DR: This work provides a theoretical analysis of RobinHood, an offline contextual bandit algorithm designed to satisfy a broad family of fairness constraints, and proves that it will not return an unfair solution with probability greater than a user-specified threshold.
Journal Article

Manifold-based multi-objective policy search with sample reuse

TL;DR: Presents two novel manifold-based algorithms for solving multi-objective Markov decision processes, which combine episodic exploration strategies and importance sampling to efficiently learn a manifold in the policy parameter space whose image in the objective space accurately approximates the Pareto frontier.
Proceedings Article

Pareto Monte Carlo Tree Search for Multi-Objective Informative Planning

TL;DR: Presents an anytime multi-objective informative planning method called Pareto Monte Carlo tree search, which allows a robot to handle potentially competing objectives, such as exploration versus exploitation, and produces optimized decisions based on the robot's knowledge of the environment state.
Journal Article

Multi-objectivization and ensembles of shapings in reinforcement learning

TL;DR: The combination of multi-objectivization and ensemble techniques is argued for as a powerful tool for boosting performance in reinforcement learning, injecting various pieces of heuristic information through reward shaping to create several distinct enriched reward signals.
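For readers unfamiliar with the shaping mechanism this summary refers to: potential-based reward shaping (Ng et al., 1999) adds γφ(s') − φ(s) to the base reward, and multi-objectivization in this sense derives several enriched reward signals from one task by using different heuristic potentials. The following is a hedged sketch with made-up potential functions, not the paper's code.

```python
"""Illustrative sketch: building several shaped reward signals from one base
reward via potential-based shaping, as one way to multi-objectivize a task.
The potentials below are arbitrary placeholders, not taken from the paper."""

GAMMA = 0.99  # assumed discount factor

def shaped_reward(base_reward, potential, s, s_next):
    """Potential-based shaping: r + gamma * phi(s') - phi(s) (Ng et al., 1999)."""
    return base_reward + GAMMA * potential(s_next) - potential(s)

# Two hypothetical heuristic potentials over a 1-D state (e.g. position on a track).
potentials = [
    lambda s: -abs(s - 10.0),   # "get close to the goal at s = 10" heuristic
    lambda s: -0.1 * s**2,      # "stay near the origin" heuristic
]

def enriched_rewards(base_reward, s, s_next):
    """One shaped signal per potential: the reward vector an ensemble learner sees."""
    return [shaped_reward(base_reward, phi, s, s_next) for phi in potentials]

print(enriched_rewards(base_reward=0.0, s=3.0, s_next=4.0))
```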
References
Book

Dynamic Programming

TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
Book

Markov Decision Processes: Discrete Stochastic Dynamic Programming

TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models and models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
Book

Introduction to Reinforcement Learning

TL;DR: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning.
Book

Evolutionary algorithms for solving multi-objective problems

TL;DR: This book presents a meta-anatomy of the multi-criteria decision-making process, which aims to provide a scaffolding for the future development of multi-criteria decision-making systems.
Proceedings Article

Policy Gradient Methods for Reinforcement Learning with Function Approximation

TL;DR: This paper proves for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.