Open Access Journal Article (DOI)

A survey of multi-objective sequential decision-making

TL;DR
This article surveys algorithms designed for sequential decision-making problems with multiple objectives and proposes a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function, and the type of policies considered.
Abstract
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
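The abstract's central distinction is between a scalarization function, which projects a multi-objective value vector to a scalar, and solution concepts such as the Pareto front, the set of value vectors not dominated by any other. A minimal sketch of both ideas follows; the function names and the example values are illustrative and not taken from the paper, and linear scalarization is only one common choice of scalarization function.

```python
def linear_scalarize(value, weights):
    """Project a multi-objective value vector to a scalar via a weight vector."""
    return sum(v * w for v, w in zip(value, weights))

def dominates(a, b):
    """True if value vector a Pareto-dominates b: at least as good in every
    objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(values):
    """Keep only the value vectors not dominated by any other candidate."""
    return [v for v in values
            if not any(dominates(u, v) for u in values if u != v)]
```

For example, among the candidate policy values `[(1.0, 3.0), (2.0, 2.0), (0.5, 0.5)]`, the first two are mutually non-dominated and form the Pareto front, while `(0.5, 0.5)` is dominated by both. A single linear scalarization with fixed weights would instead select one point on (the convex hull of) that front.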


Citations
Book Chapter (DOI)

Multiple criteria decision making

TL;DR: This chapter imagines a decision maker (or a group of experts) trying to establish or examine fair procedures for combining opinions about alternatives seen from different points of view.
Posted Content

Deep Reinforcement Learning: An Overview

Yuxi Li
25 Jan 2017
TL;DR: This work discusses core RL elements, including value functions (in particular the Deep Q-Network, DQN), policy, reward, model, planning, and exploration, as well as important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn.
Journal Article (DOI)

Deep Reinforcement Learning for Autonomous Driving: A Survey

TL;DR: This review summarises deep reinforcement learning algorithms, provides a taxonomy of automated driving tasks where (D)RL methods have been employed, highlights the key algorithmic challenges as well as those of deploying real-world autonomous driving agents, discusses the role of simulators in training agents, and finally covers methods to evaluate, test, and robustify existing solutions in RL and imitation learning.
Posted Content

Multi-Task Learning as Multi-Objective Optimization

TL;DR: This paper casts multi-task learning as a multi-objective optimization problem, with the overall objective of finding a Pareto optimal solution, and proposes an upper bound for the multi-objective loss, showing that it can be optimized efficiently.
Posted Content

Challenges of Real-World Reinforcement Learning

TL;DR: This work presents a set of nine unique challenges that must be addressed to productionize RL for real-world problems, along with an example domain that has been modified to exhibit these challenges as a testbed for practical RL research.
References
Proceedings Article (DOI)

Distributed W-Learning: Multi-Policy Optimization in Self-Organizing Systems

TL;DR: Distributed W-Learning is a reinforcement learning (RL)-based algorithm for collaborative agent-based self-optimization towards multiple policies. Relying only on local interactions and learning, it can improve the performance of multiple policies deployed simultaneously, even over corresponding single-policy deployments.
Journal Article (DOI)

Explicit temporal models for decision-theoretic planning of clinical management.

TL;DR: This article explores the suitability of partially observable Markov decision processes for formalising the planning of clinical management, and how probabilistic network representations can be used to alleviate the resulting obstacles.
Journal Article (DOI)

Reinforcement Learning for Call Admission Control and Routing under Quality of Service Constraints in Multimedia Networks

TL;DR: This paper shows that reinforcement learning provides a solution to the call admission control and routing problem in multimedia networks, earning significantly higher revenues than alternative heuristics.
Proceedings Article

The Steering Approach for Multi-Criteria Reinforcement Learning

TL;DR: An algorithm for achieving this task is devised, based on the theory of approachability for stochastic games; it combines, in an appropriate way, a finite set of standard, scalar-reward learning algorithms.
Posted Content

Two Views on Multiple Mean-Payoff Objectives in Markov Decision Processes

TL;DR: It is proved that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time, and that the trade-off curve (Pareto curve) can be epsilon-approximated in time polynomial in the size of the MDP and 1/epsilon, and exponential in the number of reward functions, for all epsilon > 0.