Open Access · Posted Content
Reinforcement Learning: A Survey
TLDR
A survey of reinforcement learning from a computer science perspective can be found in this article, where the authors discuss the central issues of RL, including trading off exploration and exploitation, establishing the foundations of RL via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.Abstract:Â
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.read more
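The trial-and-error loop the abstract describes can be sketched with tabular Q-learning. This is a minimal illustration, not code from the survey: the two-state chain MDP, reward scheme, and hyperparameters below are all assumptions chosen to make the example self-contained. The epsilon-greedy choice shows the exploration/exploitation trade-off, and the temporal-difference update shows learning from delayed reinforcement.

```python
import random

# Hypothetical 2-state chain MDP (illustrative assumption): states 0 and 1,
# actions 0 and 1, where the action directly selects the next state.
# Being in state 1 yields reward 1; everything else yields 0.
def step(state, action):
    next_state = action
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        for _ in range(10):  # short episodes
            # Epsilon-greedy: explore with probability epsilon, else exploit.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max(range(2), key=lambda act: Q[s][act])
            s2, r = step(s, a)
            # Temporal-difference update: credit delayed reward via bootstrapping.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# In both states, moving toward state 1 (action 1) should earn the higher value.
assert Q[0][1] > Q[0][0] and Q[1][1] > Q[1][0]
```

With a discount of 0.9, the learned values approach 10 for the rewarding action and 9 for the other, the geometric sums one would compute by hand for this chain.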
Citations
Proceedings Article · DOI
Design and evaluation of learning algorithms for dynamic resource management in virtual networks
TL;DR: Simulations show that the dynamic approach significantly improves both the virtual-network acceptance ratio and the maximum number of accepted virtual-network requests at any time, while ensuring that quality-of-service requirements such as packet drop rate and virtual-link delay are not affected.
Journal Article · DOI
A Multi-Agent Framework for Packet Routing in Wireless Sensor Networks
Dayong Ye, Minjie Zhang, Yun Yang, +2 more
TL;DR: This paper proposes a multi-agent framework that enables each sensor node to build a cooperative neighbour set based on past routing experience; the framework can also assist many existing routing approaches in improving their routing performance.
Book Chapter · DOI
Formal Specification for Deep Neural Networks
Sanjit A. Seshia, Ankush Desai, Tommaso Dreossi, Daniel J. Fremont, Shromona Ghosh, Edward Kim, Sumukh Shivakumar, Marcell Vazquez-Chanlatte, Xiangyu Yue, +8 more
TL;DR: The landscape of formal specification for deep neural networks is surveyed, and the opportunities and challenges for formal methods for this domain are discussed.
Posted Content
Recent Advances in Neural Question Generation.
TL;DR: A comprehensive survey of neural question generation, examining the corpora, methodologies, and evaluation methods and pointing out the potential directions ahead.
Journal Article · DOI
A symbiotic brain-machine interface through value-based decision making.
Babak Mahmoudi, Justin C. Sanchez, +1 more
TL;DR: A new BMI framework in which a computational agent symbiotically decodes users' intended actions, utilizing both motor commands and goal information taken directly from the brain through a continuous Perception-Action-Reward Cycle (PARC).
References
Book
Genetic algorithms in search, optimization, and machine learning
TL;DR: In this book, the author presents the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields, including computer programming and mathematics.
Journal Article · DOI
Machine learning
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Book
Markov Decision Processes: Discrete Stochastic Dynamic Programming
TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
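The discounted infinite-horizon models treated in Puterman's book are typically solved by value iteration, i.e. repeated application of the Bellman optimality backup until the value function stops changing. The sketch below uses a tiny illustrative MDP of my own construction (the transition table, rewards, and discount are assumptions, not material from the book).

```python
# Illustrative deterministic MDP: transition[s][a] = (next_state, reward).
# State 1 pays reward 1 for staying; state 0 pays nothing.
transition = {
    0: {0: (0, 0.0), 1: (1, 0.0)},
    1: {0: (0, 0.0), 1: (1, 1.0)},
}
gamma = 0.9  # discount factor < 1 guarantees convergence

def value_iteration(tol=1e-8):
    V = {s: 0.0 for s in transition}
    while True:
        delta = 0.0
        for s in transition:
            # Bellman optimality backup: best one-step lookahead value.
            best = max(r + gamma * V[s2] for (s2, r) in transition[s].values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration()
# Hand check: V[1] = 1/(1 - gamma) = 10, and V[0] = gamma * V[1] = 9.
assert abs(V[1] - 10.0) < 1e-6 and abs(V[0] - 9.0) < 1e-6
```

The fixed point matches the geometric-series values one computes by hand, which is the standard sanity check for a discounted model.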
Book
Dynamic Programming and Optimal Control
TL;DR: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of dynamic programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization.
Book
Parallel and Distributed Computation: Numerical Methods
TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and presents both Jacobi and Gauss-Seidel iterations, which serve as reference algorithms for many of the computational approaches addressed later.