Open Access Journal ArticleDOI

A survey of multi-objective sequential decision-making

TL;DR
This article surveys algorithms designed for sequential decision-making problems with multiple objectives and proposes a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function, and the type of policies considered.
Abstract
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
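
To make the survey's central distinction concrete, the following minimal sketch (our own illustration with invented value vectors, not an algorithm from the paper) contrasts a linear scalarization function, which projects each multi-objective value to a scalar and picks a single policy, with Pareto dominance, which retains the whole front of non-dominated values:

```python
# Minimal sketch: linear scalarization vs. Pareto dominance over
# multi-objective policy values. The value vectors are invented for illustration.

def linear_scalarize(value, weights):
    """Project a multi-objective value vector onto a scalar via a weighted sum."""
    return sum(w * v for w, v in zip(weights, value))

def dominates(a, b):
    """True if value vector a Pareto-dominates b (>= everywhere, > somewhere)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(values):
    """Keep only the vectors not dominated by any other vector."""
    return [v for v in values if not any(dominates(u, v) for u in values if u != v)]

# Hypothetical values of four policies on two objectives.
policy_values = [(4.0, 1.0), (3.0, 3.0), (1.0, 4.0), (2.0, 2.0)]

print(pareto_front(policy_values))  # [(4.0, 1.0), (3.0, 3.0), (1.0, 4.0)]
best = max(policy_values, key=lambda v: linear_scalarize(v, (0.5, 0.5)))
print(best)                         # (3.0, 3.0) selected under equal weights
```

Under the equal-weight scalarization a single policy is returned, while the Pareto front keeps all three non-dominated value vectors; which of these solution concepts is appropriate depends on the scenario and the scalarization function, which is exactly the distinction the survey's taxonomy formalizes.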

Citations
Journal ArticleDOI

On the Practical Art of State Definitions for Markov Decision Process Construction

TL;DR: This paper focuses on the first step in the MDP modeling process, defining the state, which is an often neglected and difficult step; it discusses the practical implications and issues associated with state definitions and illustrates them with numerous examples.
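
As a hedged illustration of why the state definition matters (the inventory-control example below is ours, not drawn from the cited paper), a state that omits information the dynamics depend on can break the Markov property, while enriching it enlarges the state space:

```python
# Hypothetical sketch of two candidate state definitions for the same
# inventory-control problem; neither is taken from the cited paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class StateCoarse:
    stock_level: int   # may violate the Markov property if demand also
                       # depends on the day of the week

@dataclass(frozen=True)
class StateRicher:
    stock_level: int
    day_of_week: int   # 0-6; restores the Markov property at the cost of
                       # a seven-fold larger state space
```
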
Journal ArticleDOI

A Reinforcement Learning based evolutionary multi-objective optimization algorithm for spectrum allocation in Cognitive Radio networks

TL;DR: This paper addresses the spectrum allocation problem in Cognitive Radio networks, treating network capacity and spectrum efficiency as conflicting objectives, models the scenario as a multi-objective optimization problem, and proposes an improved version of the Non-dominated Sorting Genetic Algorithm that incorporates a self-tuning parameter approach to handle the conflicting objectives.
Journal ArticleDOI

Learning adversarial attack policies through multi-objective reinforcement learning

TL;DR: This paper proposes a novel formulation of the process of learning an attack policy as a multi-objective Markov Decision Process with two objectives: maximizing the performance loss of the attacked policy and minimizing the cost of the attacks.
Journal ArticleDOI

Opponent learning awareness and modelling in multi-objective normal form games

TL;DR: This paper studies the effects of opponent learning awareness on multi-objective multi-agent interactions with nonlinear utilities and extends the actor-critic and policy gradient formulations to allow reinforcement learning of mixed strategies in this setting.
Proceedings ArticleDOI

Agent Coordination in Air Combat Simulation using Multi-Agent Deep Reinforcement Learning

TL;DR: This work empirically evaluates a number of approaches in two air combat scenarios, demonstrating that curriculum learning is a promising approach for handling the high-dimensional state space of the air combat domain and that multi-objective learning can produce synthetic agents with diverse characteristics, which can stimulate human pilots in training.
References
Book

Dynamic Programming

TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
Book

Markov Decision Processes: Discrete Stochastic Dynamic Programming

TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models and models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
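
As background for readers new to this model class, here is a compact value-iteration sketch for a discounted finite MDP; the algorithm is standard, but the two-state transition model is a toy of our own:

```python
# Value iteration for a small discounted MDP, written against an explicit
# transition model P[s][a] = list of (probability, next_state, reward).
# The two-state, two-action MDP below is a toy example, not from the book.

GAMMA, TOL = 0.95, 1e-8

P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
}

V = {s: 0.0 for s in P}
while True:
    delta = 0.0
    for s in P:
        # Bellman optimality backup: best expected one-step return plus
        # discounted value of the successor state.
        best = max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < TOL:
        break

print(V)  # converges to the optimal state values of the toy MDP
```
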
Book

Introduction to Reinforcement Learning

TL;DR: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning.
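
One of the key algorithms the book covers is tabular Q-learning; a minimal sketch of its update rule is given below (the single hand-written transition fed to it is purely illustrative):

```python
# Minimal tabular Q-learning update (the canonical rule from the RL literature);
# the transition passed to q_update below is invented for illustration.
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99
Q = defaultdict(float)          # keyed by (state, action), defaults to 0.0

def q_update(s, a, r, s_next, actions):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + GAMMA * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

q_update(s="s0", a="left", r=1.0, s_next="s1", actions=["left", "right"])
print(Q[("s0", "left")])        # 0.1 after one update of an all-zero table
```
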
Book

Evolutionary algorithms for solving multi-objective problems

TL;DR: This book presents a meta-anatomy of the multi-criteria decision-making process, which aims to provide a scaffolding for the future development of multi-criteria decision-making systems.
Proceedings Article

Policy Gradient Methods for Reinforcement Learning with Function Approximation

TL;DR: This paper proves for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.
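
The result rests on the policy-gradient theorem, which in its commonly quoted form (with d^pi the on-policy state weighting and Q^pi the action-value function of the parameterized policy) reads:

```latex
% Policy-gradient theorem, commonly quoted form: the gradient of the
% performance J of a parameterized policy \pi_\theta.
\[
  \nabla_\theta J(\theta)
    = \sum_{s} d^{\pi}(s) \sum_{a} \nabla_\theta \pi_\theta(a \mid s)\, Q^{\pi}(s, a)
\]
```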