Open Access Proceedings Article
Prioritized Experience Replay
TLDR
TL;DR: Prioritized experience replay, as presented in this paper, is a framework for prioritizing experience so as to replay important transitions more frequently and therefore learn more efficiently; applied to DQN, a reinforcement learning algorithm that achieved human-level performance across many Atari games, it yields a new state of the art.
Abstract:
Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state of the art, outperforming DQN with uniform replay on 41 out of 49 games.
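The abstract's core idea can be illustrated with a minimal sketch of proportional prioritized replay. This is an illustrative simplification, not the paper's sum-tree implementation: priorities are stored in a flat array, new transitions receive the current maximum priority so they are replayed at least once, and importance-sampling weights correct the bias introduced by non-uniform sampling. The class name and interface below are assumptions for illustration.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Illustrative proportional prioritized replay (not the paper's sum-tree).

    Priorities:  p_i = (|TD error| + eps)^alpha
    Sampling:    P(i) = p_i / sum_j p_j
    IS weights:  w_i = (N * P(i))^(-beta), normalized by the batch maximum.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha
        self.eps = eps
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0  # next write position (circular buffer)

    def add(self, transition):
        # New transitions get the current max priority so they are
        # guaranteed to be sampled at least once before decaying.
        max_p = self.priorities[: len(self.data)].max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[: len(self.data)]
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct for the non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors):
        # After a learning step, refresh priorities from the new TD errors.
        self.priorities[idx] = (np.abs(td_errors) + self.eps) ** self.alpha
```

In practice the paper replaces the O(N) flat-array sampling above with a sum-tree so that sampling and priority updates are O(log N).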
Citations
Posted Content
Importance Resampling for Off-policy Prediction
TL;DR: This work proposes Importance Resampling (IR) for off-policy prediction, which resamples experience from a replay buffer and applies standard on-policy updates, and characterizes the bias and consistency of IR, particularly compared to Weighted Importance Sampling (WIS).
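The resampling idea in that TL;DR can be sketched briefly. This is an assumed, simplified interface: rather than weighting each update by an importance ratio, indices are resampled from the buffer with probability proportional to the ratios, after which plain on-policy updates apply.

```python
import numpy as np

def importance_resample(buffer, ratios, batch_size, rng=None):
    """Sketch of Importance Resampling: draw transitions with probability
    proportional to their importance ratios rho_i = pi(a|s) / mu(a|s),
    then apply unweighted on-policy updates to the resampled batch.
    (Interface and names are illustrative, not from the cited paper.)"""
    rng = np.random.default_rng(rng)
    probs = np.asarray(ratios, dtype=float)
    probs = probs / probs.sum()  # normalize ratios into a distribution
    idx = rng.choice(len(buffer), size=batch_size, p=probs)
    return [buffer[i] for i in idx]
```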
Proceedings ArticleDOI
Reinforcement Learning-Based Bus Holding for High-Frequency Services
TL;DR: This work introduces a Reinforcement Learning approach which is capable of making holistic bus holding decisions in real-time after the completion of a training period, and demonstrates a significant improvement in scenarios with strong travel-time disturbances and a slight improvement in scenarios with low travel-time variations.
Proceedings ArticleDOI
Comprehensible Context-driven Text Game Playing
Xusen Yin, Jonathan May +1 more
TL;DR: This paper uses a fast CNN to encode position- and syntax-oriented structures extracted from observed texts as states, and augments the reward signal in a universal and practical manner to learn a superior agent.
Posted Content
Revisiting Fundamentals of Experience Replay
William Fedus, Prajit Ramachandran, Rishabh Agarwal, Yoshua Bengio, Hugo Larochelle, Mark Rowland, Will Dabney +6 more
TL;DR: In this paper, the authors present a systematic and extensive analysis of experience replay in Q-learning methods, focusing on two fundamental properties: the replay capacity and the ratio of learning updates to experience collected (replay ratio).
Journal ArticleDOI
Anomaly Monitoring Framework in Lane Detection With a Generative Adversarial Network
TL;DR: Experimental results show that using the proposed anomaly detection framework for monitoring lane abnormality improves performance by 12% compared to a vanilla recurrent neural network.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
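The Adam update described in that TL;DR can be written out as a short sketch: exponential moving averages of the gradient (first moment) and squared gradient (second raw moment), each bias-corrected, drive a per-parameter adaptive step. Hyperparameter defaults follow the paper; the function name is an illustrative assumption.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (sketch). m and v are the running first and second
    moment estimates; t is the 1-based step count used for bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second raw-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

For example, iterating this update on f(x) = x^2 (gradient 2x) drives x toward the minimum at 0.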
Journal ArticleDOI
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; it can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal ArticleDOI
Human-level control through deep reinforcement learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis +18 more
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Posted Content
Playing Atari with Deep Reinforcement Learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller +6 more
TL;DR: This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning, which outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
Proceedings ArticleDOI
A discriminatively trained, multiscale, deformable part model
TL;DR: This paper presents a discriminatively trained, multiscale, deformable part model for object detection that achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge and outperforms the best results of the 2007 challenge in ten out of twenty categories.
Related Papers (5)
Human-level control through deep reinforcement learning
Mastering the game of Go with deep neural networks and tree search
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis +19 more