Open Access Journal Article

Mastering the game of Go without human knowledge

TLDR
An algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond the game rules, achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
Abstract
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo. Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.

To beat world champions at the game of Go, the computer program AlphaGo has relied largely on supervised learning from millions of human expert moves. David Silver and colleagues have now produced a system called AlphaGo Zero, which is based purely on reinforcement learning and learns solely from self-play. Starting from random moves, it can reach superhuman level in just a couple of days of training and five million games of self-play, and can now beat all previous versions of AlphaGo. Because the machine independently discovers the same fundamental principles of the game that took humans millennia to conceptualize, the work suggests that such principles have some universal character, beyond human bias.
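The training loop the abstract describes can be summarized in a short sketch. The Python below is a structural illustration only: `play_one_game` and `train_step` are hypothetical stand-ins for MCTS-guided self-play and for gradient updates on the policy/value network, and the fabricated game data exists only so the loop runs end to end; none of this reproduces the paper's actual system.

```python
import random

def play_one_game(network):
    """Hypothetical stand-in for one game of MCTS-guided self-play.
    Returns (states, search_policies, winner); a short random game is
    fabricated here so the loop is runnable end to end."""
    states = [f"position_{i}" for i in range(5)]
    search_policies = [[0.5, 0.5] for _ in states]  # stand-in for MCTS visit counts
    winner = random.choice([+1, -1])                # +1 = black wins, -1 = white wins
    return states, search_policies, winner

def train_step(network, examples):
    """Hypothetical stand-in for a gradient update: fit the policy head to
    the search probabilities and the value head to the game outcomes."""
    return network  # a real system would return updated weights

def self_play_training(network, iterations=3, games_per_iteration=10):
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iteration):
            states, policies, winner = play_one_game(network)
            # Every position is labelled with the eventual winner, so the
            # value head learns to predict game outcomes from any state.
            examples.extend((s, p, winner) for s, p in zip(states, policies))
        # The improved network guides the tree search in the next iteration,
        # which in turn produces stronger self-play data.
        network = train_step(network, examples)
    return network

if __name__ == "__main__":
    self_play_training(network=None)
```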


Citations
Posted Content

Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning

TL;DR: An implicit under-parameterization phenomenon in value-based deep RL methods that use bootstrapping is identified: when value functions are trained with gradient descent using iterated regression onto target values generated by previous instances of the value network, more gradient updates decrease the expressivity of the current value network.
Posted Content

Finite-Sample Analysis of Nonlinear Stochastic Approximation with Applications in Reinforcement Learning

TL;DR: This paper studies a nonlinear Stochastic Approximation (SA) algorithm under Markovian noise, derives its finite-sample convergence bounds, and uses these results to establish finite-sample bounds for the popular Q-learning with linear function approximation algorithm for solving the RL problem.
Posted Content

VRKitchen: an Interactive 3D Virtual Environment for Task-oriented Learning.

TL;DR: This work designs and implements a virtual reality (VR) system, VRKitchen, with integrated functions that enable embodied agents powered by modern AI methods to perform complex tasks involving a wide range of fine-grained object manipulations in a realistic environment, and that allow human teachers to perform demonstrations to train agents.
Posted Content

When Blockchain Meets AI: Optimal Mining Strategy Achieved By Machine Learning

TL;DR: Experimental results indicate that, without knowing the parameter values of the mining MDP model, the multidimensional RL mining algorithm can still achieve optimal performance over time‐varying blockchain networks.
Journal Article

Evolution of Bio-Inspired Artificial Synapses: Materials, Structures, and Mechanisms.

TL;DR: This work reviews recent progress on artificial synapses, introduces synaptic plasticity and functional emulation, and then discusses synaptic electronic devices for neuromorphic computing systems.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting networks won first place in the ILSVRC 2015 classification task.
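The core of the residual learning framework is a block that learns a residual function F(x) and adds it back to its input, so very deep stacks remain easy to optimize. A minimal sketch in PyTorch (an assumption for illustration; the paper predates PyTorch and describes several block variants, including projection shortcuts and bottlenecks not shown here):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One basic residual block: output = relu(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        return torch.relu(residual + x)  # identity shortcut: F(x) + x
```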
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors present a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
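As a rough illustration of that layer layout, here is a hedged PyTorch sketch (an assumption; the original was a custom GPU implementation, and the kernel sizes, strides, and channel counts below follow common descriptions of the network rather than the paper verbatim):

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Approximate layout: five conv layers (pooling after the first,
    second, and fifth) followed by three fully connected layers."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Adaptive pooling fixes the spatial size so the flatten below
        # works for any reasonable input resolution.
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, num_classes),  # 1000-way softmax applied in the loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.avgpool(self.features(x)))
```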
Journal Article

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.