Open Access Posted Content

Reinforcement Learning: A Survey

TLDR
A survey of reinforcement learning from a computer science perspective can be found in this article, where the authors discuss the central issues of RL, including trading off exploration and exploitation, establishing the foundations of RL via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.
Abstract
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
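The abstract's central issues — balancing exploration against exploitation and learning from delayed reinforcement — can be illustrated with a minimal tabular Q-learning sketch. The chain environment, reward structure, and all hyperparameters below are illustrative assumptions, not taken from the survey: five states in a row, reward only at the far end, and an epsilon-greedy rule that occasionally explores instead of exploiting the current value estimates.

```python
import random

N_STATES = 5
ACTIONS = (0, 1)  # 0 = move left, 1 = move right

def step(state, action):
    """Deterministic chain dynamics; reward 1.0 only on reaching the goal."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # tabular Q-values
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: explore with probability epsilon, else exploit.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            s2, r, done = step(s, a)
            # One-step backup: the delayed terminal reward propagates
            # backward through the value estimates over repeated episodes.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy after training: every non-terminal state moves right.
policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(N_STATES)]
```

The backup rule is the standard Q-learning update; the point of the sketch is how a reward delivered only at the final step nonetheless shapes the value estimates of every earlier state.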


Citations
Journal ArticleDOI

Preventing undesirable behavior of intelligent machines.

TL;DR: A general framework for algorithm design is introduced in which the burden of avoiding undesirable behavior is shifted from the user to the designer of the algorithm, and this framework simplifies the problem of specifying and regulating undesirable behavior.
Journal ArticleDOI

Detection of online phishing email using dynamic evolving neural network based on reinforcement learning

TL;DR: Through rigorous testing, it is demonstrated that the proposed technique handles zero-day phishing attacks with high performance, achieving accuracy, TPR, and TNR of 98.63%, 99.07%, and 98.19%, respectively.
Proceedings Article

Programmable Reinforcement Learning Agents

TL;DR: Together, the methods presented in this work comprise a system for agent design that allows the programmer to specify what they know, hint at what they suspect using soft shaping, and leave unspecified that which they don't know; the system then optimally completes the program through experience and takes advantage of the hierarchical structure of the specified program to speed learning.
Journal ArticleDOI

Learning-Based Energy-Efficient Data Collection by Unmanned Vehicles in Smart Cities

TL;DR: This paper proposes to leverage emerging deep reinforcement learning (DRL) techniques to enable model-free control of unmanned vehicles, and presents a novel, highly effective control framework, "DRL-RVC," which uses a convolutional neural network to extract the necessary features and makes decisions under the guidance of a deep Q network.
Journal ArticleDOI

Real-Time State Estimation in a Flight Simulator Using fNIRS

TL;DR: An on-line fNIRS-based inference system that integrates two complementary estimators; its estimates match the pilot's real state significantly better than chance, and it establishes reusable building blocks for further fNIRS-based passive brain-computer interface development.
References
Book

Genetic algorithms in search, optimization, and machine learning

TL;DR: In this book, the author presents the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields, including computer programming and mathematics.
Journal ArticleDOI

Machine learning

TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Book

Markov Decision Processes: Discrete Stochastic Dynamic Programming

TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models and models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
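The discounted infinite-horizon model treated most fully in Puterman's book admits a simple dynamic-programming solution, value iteration, which the survey builds on. A minimal sketch for a hypothetical three-state MDP follows; the transition probabilities and rewards are made up purely for illustration.

```python
# P[s][a] = list of (probability, next_state, reward) triples for a
# small, hypothetical finite MDP (numbers chosen only for illustration).
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 0.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 2, 1.0)]},
    2: {0: [(1.0, 2, 0.0)], 1: [(1.0, 2, 0.0)]},  # absorbing, zero reward
}

def value_iteration(P, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality backup until the value change
    across a full sweep falls below tol."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            v_new = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            return V

V = value_iteration(P)
```

Because the backup is a contraction with modulus gamma, the sweep converges to the unique optimal value function regardless of the initial guess; here state 1 is worth exactly the immediate reward of 1.0, and state 0 inherits a discounted fraction of it.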
Book

Dynamic Programming and Optimal Control

TL;DR: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of dynamic programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization.
Book

Parallel and Distributed Computation: Numerical Methods

TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and presents both Jacobi and Gauss-Seidel iterations, which serve as reference algorithms for many of the computational approaches addressed later.