
Dustin Morrill

Researcher at University of Alberta

Publications: 21
Citations: 1293

Dustin Morrill is an academic researcher at the University of Alberta. His research focuses on reinforcement learning and regret minimization. He has an h-index of 9 and has co-authored 19 publications receiving 933 citations.

Papers
Journal ArticleDOI

DeepStack: Expert-level artificial intelligence in heads-up no-limit poker

TL;DR: DeepStack is introduced, an algorithm for imperfect-information settings that combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning.
Journal ArticleDOI

DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker

TL;DR: DeepStack combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is learned automatically from self-play using deep learning.
Proceedings Article

Solving games with functional regret estimation

TL;DR: Proposes an online learning method for minimizing regret in large extensive-form games: a function approximator is trained online to estimate the regret of each action, and these estimates are used in place of the true regrets to define a sequence of policies.
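The policies in this line of work are typically derived from cumulative regrets via regret matching. A minimal sketch of that underlying update, in a hypothetical toy normal-form game (rock-paper-scissors) with exact regrets; the paper's contribution is to replace these exact cumulative regrets with the outputs of a learned estimator, which is what makes the approach tractable in large extensive-form games:

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (toy example).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def regret_matching_policy(cum_regret):
    """Play each action in proportion to its positive cumulative regret."""
    pos = np.maximum(cum_regret, 0.0)
    total = pos.sum()
    n = len(cum_regret)
    return pos / total if total > 0 else np.full(n, 1.0 / n)

def self_play(iters=50000, seed=1):
    rng = np.random.default_rng(seed)
    # Small random initial "regrets" so the dynamics start off-equilibrium.
    r1, r2 = rng.random(3), rng.random(3)
    avg1 = np.zeros(3)
    avg2 = np.zeros(3)
    for _ in range(iters):
        p1 = regret_matching_policy(r1)
        p2 = regret_matching_policy(r2)
        avg1 += p1
        avg2 += p2
        u1 = A @ p2        # expected payoff of each row action vs. p2
        u2 = -A.T @ p1     # expected payoff of each column action vs. p1
        r1 += u1 - p1 @ u1  # instantaneous regret: action value minus policy value
        r2 += u2 - p2 @ u2
    return avg1 / iters, avg2 / iters
```

The average strategies (not the current ones) are what converge toward equilibrium; in this game they approach the uniform Nash strategy.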
Proceedings ArticleDOI

Computing Approximate Equilibria in Sequential Adversarial Games by Exploitability Descent

TL;DR: Presents exploitability descent, a new algorithm that computes approximate equilibria in two-player zero-sum extensive-form games with imperfect information by directly optimizing each player's policy against worst-case opponents. Under this optimization, the exploitability of a player's strategy is proven to converge asymptotically to zero.
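The idea of direct policy optimization against a worst-case opponent can be sketched in a toy normal-form game. This is a hypothetical illustration, not the paper's extensive-form algorithm: a softmax-parameterized policy ascends the gradient of its worst-case (best-response) value in rock-paper-scissors:

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (toy example;
# the paper targets extensive-form games with imperfect information).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def exploitability_descent(iters=5000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=3)  # policy logits for the row player
    for _ in range(iters):
        x = softmax(theta)
        # A best-responding opponent picks the column that minimizes
        # the row player's expected payoff.
        col_values = A.T @ x
        j = int(np.argmin(col_values))
        g = A[:, j]                    # gradient of the worst-case value w.r.t. x
        theta += lr * x * (g - x @ g)  # ascend through the softmax Jacobian
    return softmax(theta)
```

Because the worst-case value min_j (Aᵀx)_j is a concave function of the policy, this ascent drives exploitability toward zero, and the policy approaches the uniform Nash strategy.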