Open Access · Journal Article · DOI

Mastering the game of Go without human knowledge

TLDR
An algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond the game rules, is introduced; it achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
Abstract
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo. Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.

To beat world champions at the game of Go, the computer program AlphaGo has relied largely on supervised learning from millions of human expert moves. David Silver and colleagues have now produced a system called AlphaGo Zero, which is based purely on reinforcement learning and learns solely from self-play. Starting from random moves, it can reach superhuman level in just a couple of days of training and five million games of self-play, and can now beat all previous versions of AlphaGo. Because the machine independently discovers the same fundamental principles of the game that took humans millennia to conceptualize, the work suggests that such principles have some universal character, beyond human bias.
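The feedback loop the abstract describes, in which search improves on the raw network policy and the network is then trained toward the search's choices, can be sketched in miniature. The sketch below is an illustrative toy, not the paper's method: a lookup table stands in for the deep neural network, the game is Nim rather than Go, and a crude rollout-based lookahead stands in for Monte Carlo tree search; all names are hypothetical.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy game (Nim): take 1-3 stones from a pile; whoever takes the last stone wins.
PILE, MOVES = 10, (1, 2, 3)

# A lookup table stands in for the neural network: pile size -> move preferences.
policy = defaultdict(lambda: {m: 1.0 for m in MOVES})

def legal(pile):
    return [m for m in MOVES if m <= pile]

def selfplay_winner(pile):
    """Play one self-play game with the current policy; return 0 if the
    player to move first wins, else 1."""
    player = 0
    while True:
        moves = legal(pile)
        m = random.choices(moves, weights=[policy[pile][mv] for mv in moves])[0]
        pile -= m
        if pile == 0:
            return player
        player ^= 1

def search(pile, sims=50):
    """Crude lookahead standing in for MCTS: estimate each move's win rate
    via self-play rollouts from the resulting position."""
    scores = {}
    for m in legal(pile):
        if pile - m == 0:
            scores[m] = 1.0  # taking the last stone wins outright
        else:
            # selfplay_winner returns 1 when the opponent (who moves next) loses
            opponent_losses = sum(selfplay_winner(pile - m) for _ in range(sims))
            scores[m] = opponent_losses / sims
    return scores

# Policy iteration: search improves on the raw policy, and the "network"
# is shifted toward the search's choice -- the self-teaching feedback loop.
for _ in range(300):
    pile = random.randint(1, PILE)
    scores = search(pile)
    best = max(scores, key=scores.get)
    policy[pile][best] += 1.0

best_move = {p: max(legal(p), key=lambda m: policy[p][m])
             for p in range(1, PILE + 1)}
```

After training, the table sharpens toward winning play on small piles (e.g. from a pile of 3 it learns to take all 3 stones), with no expert examples used, only self-play outcomes, mirroring the tabula-rasa setup in spirit.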


Citations
Posted Content

Learning with AMIGo: Adversarially Motivated Intrinsic Goals

TL;DR: AMIGo is proposed, a novel agent incorporating a goal-generating teacher that proposes Adversarially Motivated Intrinsic Goals to train a goal-conditioned "student" policy in the absence of (or alongside) environment reward, in order to solve challenging procedurally generated tasks.
Journal ArticleDOI

Tianjic: A Unified and Scalable Chip Bridging Spike-Based and Continuous Neural Computation

TL;DR: Presents a unified model description framework and a unified processing architecture (Tianjic) covering the full stack from software to hardware, together with a compatible routing infrastructure that enables homogeneous and heterogeneous scalability on a decentralized many-core network.
Journal ArticleDOI

Hierarchical Tracking by Reinforcement Learning-Based Searching and Coarse-to-Fine Verifying

TL;DR: This work proposes a hierarchical tracker that learns to move and track by combining data-driven search at the coarse level with coarse-to-fine verification at the fine level, and utilizes a recurrent convolutional neural network-based deep Q-network to effectively learn data-driven searching policies.
Proceedings Article

Learning Reward Machines for Partially Observable Reinforcement Learning

TL;DR: It is shown that RMs can be learned from experience, instead of being specified by the user, and that the resulting problem decomposition can be used to effectively solve partially observable RL problems.
Journal ArticleDOI

An adaptive deep reinforcement learning approach for MIMO PID control of mobile robots.

TL;DR: Proposes an intelligent control system based on deep reinforcement learning for self-adaptive multiple PID controllers on mobile robots; results demonstrated that the controllers can compensate for, and even adapt to, changes in uncertain environments, providing a model-free, unsupervised solution.
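The core idea of that citation, adapting PID gains from interaction data rather than hand-tuning them, can be illustrated in a few lines. The sketch below is a simplification, not the cited paper's method: a random-search policy improvement stands in for the deep RL agent, and a toy first-order plant stands in for the robot; all names and parameters are hypothetical.

```python
import random

def simulate(gains, steps=200, dt=0.05):
    """Run one episode: a PID controller tracking a unit setpoint on a
    toy first-order plant x' = -x + u. Returns the accumulated tracking cost."""
    kp, ki, kd = gains
    x, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        x += (-x + u) * dt
        prev_err = err
        cost += abs(err) * dt
    return cost

random.seed(0)
gains = [0.5, 0.0, 0.0]            # deliberately poor initial tuning
base = simulate(gains)
for _ in range(300):               # crude policy search standing in for deep RL
    trial = [g + random.gauss(0, 0.1) for g in gains]
    c = simulate(trial)
    if c < base:                   # keep perturbations that lower tracking cost
        gains, base = trial, c
```

The adapted gains achieve a lower tracking cost than the initial tuning, using only the episode cost signal and no model of the plant, which is the "model-free" aspect the summary refers to.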
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: Describes a deep convolutional neural network that achieved state-of-the-art ImageNet classification performance, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications, such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.