Book Chapter

The Games Computers (and People) Play

Jonathan Schaeffer
Vol. 52, p. 1179
TLDR
The past, present, and future of the development of game-playing programs are discussed, along with some surprising changes of direction that will result in games becoming more of an experimental testbed for mainstream AI research, with less emphasis on building world-championship-caliber programs.
Abstract
The development of high-performance game-playing programs has been one of the major successes of artificial intelligence research. The results have been outstanding but, with one notable exception (Deep Blue), they have not been widely disseminated. This talk will discuss the past, present, and future of the development of games-playing programs. Case studies for backgammon, bridge, checkers, chess, go, hex, Othello, poker, and Scrabble will be used. The research emphasis of the past has been on high performance (synonymous with brute-force search) for two-player perfect-information games. The research emphasis of the present encompasses multi-player imperfect/nondeterministic information games. And what of the future? There are some surprising changes of direction occurring that will result in games being more of an experimental testbed for mainstream AI research, with less emphasis on building world-championship-caliber programs.

One of the most profound contributions to mankind’s knowledge has been made by the artificial intelligence (AI) research community: the realization that intelligence is not uniquely human. Using computers, it is possible to achieve human-like behavior in nonhumans. In other words, the illusion of human intelligence can be created in a computer. This idea has been vividly illustrated throughout the history of computer games research. Unlike most of the early work in AI, game researchers were interested in developing high-performance, real-time solutions to challenging problems. This led to an ends-justify-the-means attitude: the result—a strong chess program—was all that mattered, not the means by which it was achieved. In contrast, much of the mainstream AI work used simplified domains, while eschewing real-time performance objectives. This research typically used human intelligence as a model: one only had to emulate the human example to achieve intelligent behavior. The battle (and philosophical) lines were drawn.

The difference in philosophy can be easily illustrated. The human brain and the computer are different machines, each with its own sets of strengths and weaknesses. Humans are good at, for example, learning, reasoning by analogy, and …
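The "brute-force search" referred to above is, in classical two-player perfect-information programs, depth-limited minimax with alpha-beta pruning. The following is a minimal sketch only; the GameState interface (legal_moves, play, is_terminal, evaluate) is a hypothetical placeholder for illustration, not code from any program discussed in the talk.

```python
# Minimal sketch of depth-limited alpha-beta search (negamax form), the
# workhorse behind most classical two-player perfect-information programs.
# GameState and its methods are hypothetical, used only for illustration.

def alpha_beta(state, depth, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of `state` from the side to move."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()          # static evaluation from the side to move's view
    best = float("-inf")
    for move in state.legal_moves():
        child = state.play(move)         # assumed to return a new state
        # Negamax: the child's value is negated for the parent.
        score = -alpha_beta(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                # cutoff: the opponent will avoid this line
            break
    return best
```

A real program layers move ordering, transposition tables, and iterative deepening on top of this core loop.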



Citations
Journal Article

Mastering the game of Go with deep neural networks and tree search

TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Journal Article

Monte-Carlo tree search and rapid action value estimation in computer Go

TL;DR: The Monte-Carlo revolution in computer Go is surveyed, the key ideas that led to the success of MoGo and subsequent Go programs are outlined, and for the first time a comprehensive description, in theory and in practice, of this extended framework for Monte-Carlo tree search is provided.
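As a rough illustration of the Monte-Carlo tree search framework the survey covers, here is a compact sketch of one UCT iteration (selection, expansion, rollout, backup). The Node bookkeeping and the GameState interface are assumptions made for illustration; the RAVE enhancements described in the paper are omitted.

```python
import math
import random

class Node:
    """Search-tree node; assumes a GameState with legal_moves/play/is_terminal/evaluate."""
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = []                       # expanded child nodes
        self.untried = list(state.legal_moves()) # moves not yet expanded
        self.visits, self.value = 0, 0.0         # visit count and summed reward

    def ucb_child(self, c=1.4):
        # UCB1: exploit average reward, explore rarely visited children.
        return max(self.children,
                   key=lambda n: n.value / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts_iteration(root):
    node = root
    # 1. Selection: descend while fully expanded and non-terminal.
    while not node.untried and node.children:
        node = node.ucb_child()
    # 2. Expansion: add one untried move.
    if node.untried:
        move = node.untried.pop()
        node.children.append(Node(node.state.play(move), parent=node))
        node = node.children[-1]
    # 3. Rollout: play random moves to the end of the game.
    state = node.state
    while not state.is_terminal():
        state = state.play(random.choice(state.legal_moves()))
    reward = state.evaluate()    # e.g. +1 win / -1 loss for the root player
    # 4. Backup: propagate the result to the root.
    # NOTE: flipping the reward sign between plies is omitted for brevity.
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent
```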
Proceedings Article

Rational and convergent learning in stochastic games

TL;DR: This paper introduces two properties as desirable for a learning agent in the presence of other learning agents, namely rationality and convergence, and contributes a new learning algorithm, WoLF policy hill-climbing, that is proven to be rational.
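A much-simplified sketch of the WoLF ("win or learn fast") policy hill-climbing idea, assuming a generic discrete environment: the agent runs ordinary Q-learning, keeps a mixed policy and a running average of that policy, and steps the policy toward the greedy action with a small rate when "winning" (the current policy outscores the average policy under Q) and a larger rate when "losing". The hyperparameter values, the clip-and-renormalize shortcut, and the interface names below are illustrative assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

class WoLFPHC:
    """Simplified WoLF policy hill-climbing agent (illustrative only)."""
    def __init__(self, actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose
        self.Q = defaultdict(lambda: {a: 0.0 for a in actions})
        self.pi = defaultdict(lambda: {a: 1.0 / len(actions) for a in actions})
        self.avg_pi = defaultdict(lambda: {a: 1.0 / len(actions) for a in actions})
        self.counts = defaultdict(int)

    def act(self, s):
        # Sample an action from the current mixed policy.
        r, total = random.random(), 0.0
        for a in self.actions:
            total += self.pi[s][a]
            if r <= total:
                return a
        return self.actions[-1]

    def update(self, s, a, reward, s_next):
        # Standard Q-learning backup.
        self.Q[s][a] += self.alpha * (
            reward + self.gamma * max(self.Q[s_next].values()) - self.Q[s][a])
        # Track the average policy seen so far in state s.
        self.counts[s] += 1
        for b in self.actions:
            self.avg_pi[s][b] += (self.pi[s][b] - self.avg_pi[s][b]) / self.counts[s]
        # "Winning" if the current policy outperforms the average policy under Q.
        winning = (sum(self.pi[s][b] * self.Q[s][b] for b in self.actions)
                   > sum(self.avg_pi[s][b] * self.Q[s][b] for b in self.actions))
        delta = self.delta_win if winning else self.delta_lose
        # Move probability mass toward the greedy action (crude renormalization).
        greedy = max(self.actions, key=lambda b: self.Q[s][b])
        for b in self.actions:
            step = delta if b == greedy else -delta / (len(self.actions) - 1)
            self.pi[s][b] = min(1.0, max(0.0, self.pi[s][b] + step))
        norm = sum(self.pi[s].values())
        for b in self.actions:
            self.pi[s][b] /= norm
```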
Proceedings Article

Agent-human interactions in the continuous double auction

TL;DR: It is found that agents consistently obtain significantly larger gains from trade than their human counterparts, in sharp contrast to the robust convergence observed in previous all-human or all-agent CDA experiments.
Journal Article

Games solved: now and in the future

TL;DR: It is concluded that, in the domain of two-person zero-sum games with perfect information, decision complexity is more important than state-space complexity as a determining factor in solving games, and that there is a trade-off between knowledge-based methods and brute-force methods.
References
Book

Search and Planning Under Incomplete Information: A Study Using Bridge Card Play

Ian Frank
TL;DR: An overview of commercial computer Bridge systems, proof-planning (solving independent goals using tactics), and methods for search in games with incomplete information.
Proceedings Article

Scout: a simple game-searching algorithm with proven optimal properties

TL;DR: A new algorithm for searching games that is conceptually simple, space efficient, and analytically tractable; it possesses optimal asymptotic properties and may offer practical advantages over α-β for deep searches.
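Scout's central idea, testing with a minimal window whether a sibling can possibly beat the best move found so far and re-searching it fully only when the test succeeds, survives in modern engines as NegaScout / principal-variation search. A minimal sketch of that descendant, reusing the hypothetical GameState interface from the alpha-beta example above:

```python
def negascout(state, depth, alpha=float("-inf"), beta=float("inf")):
    """NegaScout / principal-variation search, a descendant of Pearl's Scout."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    best = float("-inf")
    first = True
    for move in state.legal_moves():
        child = state.play(move)
        if first:
            # The first (presumably best-ordered) move gets a full-window search.
            score = -negascout(child, depth - 1, -beta, -alpha)
            first = False
        else:
            # Scout-style null-window test: can this move beat alpha at all?
            score = -negascout(child, depth - 1, -alpha - 1, -alpha)
            if alpha < score < beta:
                # The test succeeded, so re-search with the full window.
                score = -negascout(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                        # beta cutoff
    return best
```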
Journal Article

Statistical feature combination for the evaluation of game positions

TL;DR: Using a large number of classified Othello positions, feature weights for evaluation functions with a game-phase-independent meaning are estimated by means of logistic regression, Fisher's linear discriminant, and the quadratic discriminant function for normally distributed features.
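The feature-weight fitting described here amounts to regression on labelled positions. A sketch of the logistic-regression variant, where `positions`, `labels`, and `extract_features` are placeholders for a real Othello data pipeline rather than anything specified in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of fitting evaluation-function feature weights from labelled
# positions; the paper also compares Fisher's linear discriminant and the
# quadratic discriminant, which are omitted here.

def fit_feature_weights(positions, labels, extract_features):
    X = np.array([extract_features(p) for p in positions])  # one feature vector per position
    y = np.array(labels)                                     # 1 = eventual win, 0 = loss
    model = LogisticRegression(max_iter=1000).fit(X, y)
    # The learned coefficients serve directly as evaluation-function weights;
    # the model's probability output doubles as a winning-probability estimate.
    return model.coef_[0], model.intercept_[0]
```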
Journal Article

Have we witnessed a real-life Turing Test?

TL;DR: The common observation that the computer won by brute force overlooks an aspect of the match that may signify a milestone in the history of computer science: for the first time, a computer seems to have passed the Turing Test.