A survey of multi-objective sequential decision-making
TLDR
This article surveys algorithms designed for sequential decision-making problems with multiple objectives and proposes a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function, and the type of policies considered.
Abstract
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
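For intuition, the distinction the abstract draws between scalarization (projecting multi-objective values to scalars) and a Pareto front (the set of non-dominated solutions) can be sketched as follows. This is a minimal illustration with hypothetical policy value vectors, not code from the survey:

```python
def dominates(v, w):
    """v dominates w if v is at least as good in every objective and
    strictly better in at least one (all objectives maximized)."""
    return all(a >= b for a, b in zip(v, w)) and any(a > b for a, b in zip(v, w))

def pareto_front(values):
    """Indices of the non-dominated value vectors."""
    return [
        i for i, v in enumerate(values)
        if not any(dominates(w, v) for j, w in enumerate(values) if j != i)
    ]

def linear_scalarize(values, weights):
    """Project each multi-objective value vector to a scalar via a weighted sum,
    the simplest (linear) scalarization function."""
    return [sum(x * w for x, w in zip(v, weights)) for v in values]

# Hypothetical value vectors for four policies over two objectives.
policy_values = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (1.5, 1.5)]
print(pareto_front(policy_values))                  # [0, 1, 2]: the last policy is dominated
print(linear_scalarize(policy_values, (0.5, 0.5)))  # [2.0, 2.0, 2.0, 1.5]
```

Note that under the equal-weight scalarization the three Pareto-optimal policies tie, which illustrates why a single scalarization can hide trade-offs that the full Pareto front preserves.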
Citations
Book Chapter
Multiple criteria decision making
János Fodor, Marc Roubens +1 more
TL;DR: This chapter imagines a decision maker (or a group of experts) trying to establish or examine fair procedures for combining opinions about alternatives viewed from different points of view.
Posted Content
Deep Reinforcement Learning: An Overview
TL;DR: This work discusses core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration, and important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn.
Journal Article
Deep Reinforcement Learning for Autonomous Driving: A Survey
B Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A. Al Sallab, Senthil Yogamani, Patrick Pérez +6 more
TL;DR: This review summarises deep reinforcement learning algorithms, provides a taxonomy of automated driving tasks where (D)RL methods have been employed, highlights the key challenges both algorithmically and in deploying real-world autonomous driving agents, discusses the role of simulators in training agents, and finally covers methods to evaluate, test, and robustify existing solutions in RL and imitation learning.
Posted Content
Multi-Task Learning as Multi-Objective Optimization
Ozan Sener, Vladlen Koltun +1 more
TL;DR: This paper casts multi-task learning as a multi-objective optimization problem, with the overall objective of finding a Pareto optimal solution, and proposes an upper bound for the multi-objective loss, showing that it can be optimized efficiently.
Posted Content
Challenges of Real-World Reinforcement Learning
TL;DR: This work presents a set of nine unique challenges that must be addressed to productionize RL for real-world problems, along with an example domain modified to exhibit these challenges as a testbed for practical RL research.
References
Proceedings Article
Policy gradient reinforcement learning for fast quadrupedal locomotion
Nate Kohl, Peter Stone +1 more
TL;DR: A machine learning approach to optimizing a quadrupedal trot gait for forward speed using a form of policy gradient reinforcement learning to automatically search the set of possible parameters with the goal of finding the fastest possible walk.
Journal Article
The Optimal Control of Partially Observable Markov Processes over the Infinite Horizon: Discounted Costs
TL;DR: The paper develops easily implemented approximations to stationary policies based on finitely transient policies and shows that the concave hull of an approximation can be included in the well-known Howard policy improvement algorithm with subsequent convergence.
Journal Article
Multiple Criteria Decision Making, Multiattribute Utility Theory: The Next Ten Years
TL;DR: This paper reviews the history of MCDM and MAUT, identifying exciting directions and promising areas for future research in their continued development and usefulness to management science over the next decade.
Journal Article
Online planning algorithms for POMDPs
TL;DR: The objectives here are to survey the various existing online POMDP methods, analyzing their properties and discussing their advantages and disadvantages, and to thoroughly evaluate these online approaches in different environments under various metrics.
Proceedings Article
Improving Elevator Performance Using Reinforcement Learning
Robert Crites, Andrew G. Barto +1 more
TL;DR: Results in simulation surpass the best of the heuristic elevator control algorithms of which the authors are aware and demonstrate the power of RL on a very large-scale stochastic dynamic optimization problem of practical utility.
Related Papers (5)
Human-level control through deep reinforcement learning
Mastering the game of Go with deep neural networks and tree search
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis +19 more