Open Access Journal ArticleDOI

A survey of multi-objective sequential decision-making

TLDR
This article surveys algorithms designed for sequential decision-making problems with multiple objectives and proposes a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function, and the type of policies considered.
Abstract
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
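To make the solution concepts in the abstract concrete, here is a minimal sketch (not from the surveyed paper; the function name and example vectors are illustrative) of extracting a Pareto front from a set of multi-objective value vectors, where a vector is Pareto optimal if no other vector weakly dominates it in every objective and strictly dominates it in at least one:

```python
import numpy as np

def pareto_front(values):
    """Return the Pareto-optimal subset of a set of value vectors.

    A vector is Pareto optimal if no other vector is at least as good
    in every objective and strictly better in at least one.
    """
    values = np.asarray(values, dtype=float)
    front = []
    for i, v in enumerate(values):
        dominated = any(
            np.all(values[j] >= v) and np.any(values[j] > v)
            for j in range(len(values)) if j != i
        )
        if not dominated:
            front.append(tuple(v))
    return front

# Example: value vectors of four policies under two objectives (higher is better).
# The last vector is dominated by the first and is filtered out.
front = pareto_front([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [1.0, 1.0]])
print(sorted(front))  # [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
```

With a linear scalarization function, the optimal policies instead lie on the convex hull of these vectors, a subset of the Pareto front.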


Citations
Book ChapterDOI

Multiple criteria decision making

TL;DR: This chapter considers a decision maker (or a group of experts) trying to establish or examine fair procedures for combining opinions about alternatives seen from different points of view.
Posted Content

Deep Reinforcement Learning: An Overview

Yuxi Li
25 Jan 2017
TL;DR: This work discusses core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration, and important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn.
Journal ArticleDOI

Deep Reinforcement Learning for Autonomous Driving: A Survey

TL;DR: This review summarises deep reinforcement learning algorithms, provides a taxonomy of automated driving tasks where (D)RL methods have been employed, highlights the key algorithmic challenges as well as challenges in deploying real-world autonomous driving agents, discusses the role of simulators in training agents, and describes methods to evaluate, test, and robustify existing solutions in RL and imitation learning.
Posted Content

Multi-Task Learning as Multi-Objective Optimization

TL;DR: This paper casts multi-task learning as a multi-objective optimization problem with the overall objective of finding a Pareto optimal solution, proposes an upper bound for the multi-objective loss, and shows that it can be optimized efficiently.
Posted Content

Challenges of Real-World Reinforcement Learning

TL;DR: This paper presents a set of nine unique challenges that must be addressed to productionize RL for real-world problems, along with an example domain modified to exhibit these challenges as a testbed for practical RL research.
References
Book ChapterDOI

Markov decision processes with multiple objectives

TL;DR: In this paper, the authors consider Markov decision processes with multiple discounted reward objectives and show that the Pareto curve can be approximated in polynomial time in the size of the MDP.
Proceedings ArticleDOI

Dynamic preferences in multi-criteria reinforcement learning

TL;DR: This paper considers the problem of learning in the presence of time-varying preferences among multiple objectives, using numeric weights to represent their importance, and proposes a method for storing a finite number of policies, choosing an appropriate policy for any weight vector, and improving upon it.
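The weight-based preference scheme summarised above can be sketched as follows. This is an illustrative fragment, not the paper's method: the stored policies and their value vectors are hypothetical, and the selection rule shown is plain linear scalarization (w · V):

```python
import numpy as np

# Hypothetical stored policies, each with a known multi-objective value vector.
policy_values = {
    "conservative": np.array([10.0, 2.0]),   # [objective 1, objective 2]
    "balanced":     np.array([6.0, 6.0]),
    "aggressive":   np.array([2.0, 10.0]),
}

def best_policy(weights):
    """Pick the stored policy maximizing the linearly scalarized value w . V."""
    w = np.asarray(weights, dtype=float)
    return max(policy_values, key=lambda name: float(w @ policy_values[name]))

# As preferences shift over time, a new weight vector selects a new policy.
print(best_policy([0.9, 0.1]))  # weights favor objective 1 -> "conservative"
print(best_policy([0.1, 0.9]))  # weights favor objective 2 -> "aggressive"
```

In the paper's setting the selected policy would then be further improved by learning, rather than used as-is.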
Journal ArticleDOI

Tree-based reinforcement learning for optimal water reservoir operation

TL;DR: In this paper, a reinforcement learning approach called fitted Q-iteration is presented: it combines continuous approximation of the value functions with off-line learning from experience to design daily, cyclostationary operating policies.
Proceedings Article

Managing Power Consumption and Performance of Computing Systems Using Reinforcement Learning

TL;DR: This paper applies RL in a realistic laboratory testbed using a Blade cluster and a dynamically varying HTTP workload running on a commercial web applications middleware platform, and demonstrates clear performance improvements over both hand-designed policies and obvious "cookbook" RL implementations.
Journal Article

Markov decision processes with multiple objectives

TL;DR: It is shown that every Pareto-optimal point can be achieved by a memoryless strategy; however, unlike in the single-objective case, the memoryless strategies may require randomization.