Open Access Journal Article (DOI)

A survey of multi-objective sequential decision-making

TL;DR
This article surveys algorithms designed for sequential decision-making problems with multiple objectives and proposes a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function, and the type of policies considered.
Abstract
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
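To make the taxonomy's solution concepts concrete, the following is a minimal sketch (with invented value vectors and helper names, not code from the article) that applies a linear scalarization function f(V, w) = w · V to the multi-objective values of a handful of policies and also extracts the Pareto front among them.

```python
import numpy as np

# Hypothetical multi-objective values for four policies, each evaluated on
# two objectives (e.g., task reward vs. energy savings); higher is better.
# These numbers are invented purely for illustration.
values = np.array([
    [1.0, 5.0],
    [3.0, 4.0],
    [4.0, 1.0],
    [2.0, 2.0],  # Pareto-dominated by [3.0, 4.0]
])

def linear_scalarization(values, w):
    """Project multi-objective values onto scalars with weight vector w."""
    return values @ w

def pareto_front(values):
    """Return indices of value vectors not Pareto-dominated by any other."""
    front = []
    for i, v in enumerate(values):
        dominated = any(
            np.all(u >= v) and np.any(u > v)
            for j, u in enumerate(values) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Known scalarization weights: the problem collapses to a single-objective
# one, and a single optimal policy suffices.
w = np.array([0.7, 0.3])
best = int(np.argmax(linear_scalarization(values, w)))
print("Best policy for w =", w, "is index", best)

# Unknown weights (or weights that are undesirable to fix in advance): the
# solution is a set of policies, here the Pareto front (indices 0, 1, 2).
print("Pareto-optimal policy indices:", pareto_front(values))
```

Sweeping w over all non-negative weight vectors recovers only the policies whose value vectors lie on the convex hull of the achievable values, which is why the survey distinguishes convex-hull solutions (sufficient for linear scalarization) from the full Pareto front (needed for arbitrary monotonically increasing scalarization functions).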



Citations
Journal Article (DOI)

A Multi-Objective Deep Reinforcement Learning Framework

TL;DR: This article proposes a scalable multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks for solving increasingly complicated multi-objective problems.

Resource-constrained Multi-agent Markov Decision Processes

F. de Nijs
TL;DR: This thesis describes new algorithms for optimizing the behavior of agents that operate in constrained environments under significant uncertainty about the effects of their actions on their state, and shows how agents can coordinate their actions under uncertainty and shared resource constraints in a broad range of conditions.
Journal Article (DOI)

Where Does Value Come From?

TL;DR: The authors suggest that their framework readily accounts for canonical phenomena observed in psychology, behavioural ecology, and economics, as well as recent findings from brain-imaging studies of value-guided decision-making.
Journal Article

ε-PAL: an active learning approach to the multi-objective optimization problem

TL;DR: ε-PAL reduces the amount of computation and the number of samples from the design space required to meet the user's desired level of accuracy, and improves significantly over a state-of-the-art multi-objective optimization method.
Journal Article (DOI)

Coordination of Electric Vehicle Charging Through Multiagent Reinforcement Learning

TL;DR: This article proposes the MultiAgent Selfish-COllaborative architecture (MASCO), a multi-agent multi-objective reinforcement learning architecture that aims to minimize energy costs and avoid transformer overloads simultaneously while allowing EV recharging.
References
Book

Dynamic Programming

TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
Book

Markov Decision Processes: Discrete Stochastic Dynamic Programming

TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
Book

Introduction to Reinforcement Learning

TL;DR: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning.
Book

Evolutionary algorithms for solving multi-objective problems

TL;DR: This book presents a meta-anatomy of the multi-criteria decision-making process, which aims to provide a scaffolding for the future development of multi-criteria decision-making systems.
Proceedings Article

Policy Gradient Methods for Reinforcement Learning with Function Approximation

TL;DR: This paper proves for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.