Journal ArticleDOI
Mastering the game of Go with deep neural networks and tree search
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis
TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Abstract:
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
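The core idea of the search described in the abstract, combining policy-network priors with accumulated value estimates to choose which move to explore, can be sketched with a PUCT-style selection rule. This is a minimal illustrative sketch, not the paper's implementation; the class, function names, and the `c_puct` constant here are assumptions for the example.

```python
import math

class Node:
    """One edge of the search tree for a candidate move (illustrative)."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a): prior probability from the policy network
        self.visit_count = 0      # N(s, a): how often this move was explored
        self.value_sum = 0.0      # sum of value-network / rollout evaluations

    def q_value(self):
        # Mean action value Q(s, a); zero before any visits.
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(children, c_puct=1.0):
    """Pick the move maximizing Q(s, a) + U(s, a), where the exploration
    bonus U is proportional to the policy prior and decays with visits."""
    total_visits = sum(child.visit_count for child in children.values())

    def puct(item):
        _, child = item
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        return child.q_value() + u

    return max(children.items(), key=puct)[0]

# Example: an unvisited move with a high prior is explored before a
# moderately valued move that has already been visited.
children = {"a": Node(prior=0.6), "b": Node(prior=0.4)}
children["b"].visit_count = 3
children["b"].value_sum = 1.5
best = select_child(children)  # → "a"
```

The key design point the abstract alludes to: the policy network narrows the search to plausible moves (via the prior term), while the value network replaces exhaustive rollouts with learned position evaluations accumulated in `value_sum`.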
Citations
Journal ArticleDOI
Relationship journeys in the internet of things: a new framework for understanding interactions between consumers and smart objects
Thomas P. Novak, Donna L. Hoffman
TL;DR: In this article, the authors present a new framework for consumer-object relationships based on the circumplex model of interpersonal complementarity and situated in assemblage theory and object-oriented ontology.
Proceedings ArticleDOI
Asymmetric Actor Critic for Image-Based Robot Learning
TL;DR: The authors exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images) by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input.
Journal ArticleDOI
Deep Optimal Stopping
TL;DR: In this paper, a deep learning method for optimal stopping problems was developed, which directly learns the optimal stopping rule from Monte Carlo samples and is broadly applicable in situations where the underlying randomness can efficiently be simulated.
Journal ArticleDOI
Reinforcement Learning for Electric Power System Decision and Control: Past Considerations and Perspectives
TL;DR: In this paper, the authors review past and very recent research considerations in using reinforcement learning (RL) to solve electric power system decision and control problems, and analyse the perspectives of RL approaches in light of the emergence of new generation, communications, and instrumentation technologies currently in use, or available for future use, in power systems.
Posted Content
A Very Brief Introduction to Machine Learning With Applications to Communication Systems
TL;DR: This tutorial-style paper provides a high-level introduction to the basics of supervised and unsupervised learning, exemplifying applications to communication networks by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: State-of-the-art image classification performance was achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
Journal ArticleDOI
Deep learning
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Book
Deep Learning
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Book
Reinforcement Learning: An Introduction
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Journal ArticleDOI
Human-level control through deep reinforcement learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.