Thomas G. Dietterich
Researcher at Oregon State University
Publications: 286
Citations: 58,937
Thomas G. Dietterich is an academic researcher at Oregon State University. His research focuses on topics including reinforcement learning and Markov decision processes. He has an h-index of 74 and has co-authored 279 publications receiving 51,935 citations. His previous affiliations include the University of Wyoming and Stanford University.
Papers
Research priorities for robust and beneficial artificial intelligence
Stuart Russell, Daniel Dewey, Max Tegmark, Anthony Aguirre, Erik Brynjolfsson, Ryan Calo, Thomas G. Dietterich, Dileep George, Bill Hibbard, Demis Hassabis, Eric Horvitz, Leslie Pack Kaelbling, James Manyika, Luke Muehlhauser, Michael Osborne, David C. Parkes, Heather R. Perkins, Francesca Rossi, Bart Selman, Murray Shanahan +19 more
TL;DR: This article gives numerous examples of worthwhile research aimed at ensuring that AI remains robust and beneficial.
Book Chapter
Error-correcting output coding corrects bias and variance
TL;DR: An investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms, shows that it can reduce the variance of the learning algorithm.
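The core idea behind ECOC can be summarized briefly: each class is assigned a binary codeword, one binary classifier is trained per codeword bit, and a test example is decoded to the class whose codeword is nearest in Hamming distance, so a few bit errors can be tolerated. A minimal sketch follows; the code matrix, dataset, and the `fit_bit` learner interface are illustrative assumptions, not taken from the paper (which studies decision-tree learners):

```python
# Minimal sketch of error-correcting output coding (ECOC) for multiclass
# classification. The 4-class, 7-bit code matrix below is an illustrative
# assumption; its rows have pairwise Hamming distance >= 4, so decoding
# can correct any single bit error.
import numpy as np

CODE = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 1, 0],
])

def train_ecoc(X, y, fit_bit):
    # Train one binary learner per code-matrix column. `fit_bit(X, bits)`
    # returns a predict function; any binary learner can be plugged in.
    return [fit_bit(X, CODE[y, j]) for j in range(CODE.shape[1])]

def predict_ecoc(models, X):
    # Collect each learner's bit predictions, then decode by choosing the
    # class whose codeword is nearest in Hamming distance.
    bits = np.stack([m(X) for m in models], axis=1)          # (n, 7)
    dists = (bits[:, None, :] != CODE[None, :, :]).sum(-1)   # (n, 4)
    return dists.argmin(axis=1)
```

Because the minimum Hamming distance between codewords here is 4, the decoder still recovers the right class even if one of the seven binary learners is systematically wrong.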
Proceedings Article
A reinforcement learning approach to job-shop scheduling
Wei Zhang, Thomas G. Dietterich +1 more
TL;DR: Reinforcement learning methods are applied to learn domain-specific heuristics for job-shop scheduling; the results suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems.
Journal Article
A model of the mechanical design process based on empirical data
TL;DR: The task/episode accumulation (TEA) model of non-routine mechanical design was developed after detailed analysis of the audio and video protocols of five mechanical designers; it explains the behavior of designers at a much finer level of detail than previous models.
Proceedings Article
The MAXQ Method for Hierarchical Reinforcement Learning
TL;DR: The paper defines a hierarchical Q learning algorithm, proves its convergence, and shows experimentally that it can learn much faster than ordinary “flat” Q learning.
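The decomposition at the heart of MAXQ expresses the value of invoking a subtask a inside a parent task p at state s as Q(p, s, a) = V(a, s) + C(p, s, a), where V is the value of completing the subtask itself and C is the expected reward for completing the parent afterwards. A minimal sketch of this recursive evaluation follows; the task graph, state names, and numeric values are illustrative assumptions, not from the paper:

```python
# Minimal sketch of the MAXQ value decomposition:
#   Q(p, s, a) = V(a, s) + C(p, s, a)
# The hypothetical task graph and the V / C tables below stand in for
# quantities that MAXQ-style learning would estimate from experience.

SUBTASKS = {"root": ["go_to_key", "go_to_door"]}                 # hypothetical task graph
V = {"go_to_key": {"s0": 2.0}, "go_to_door": {"s0": 1.0}}        # subtask values
C = {"root": {"s0": {"go_to_key": 5.0, "go_to_door": 3.0}}}      # completion values

def q(parent, state, subtask):
    # Decomposed action value for choosing `subtask` inside `parent`.
    return V[subtask][state] + C[parent][state][subtask]

def value(parent, state):
    # A composite task's value is its best decomposed Q over subtasks;
    # a task with no children just reads its stored value.
    if parent not in SUBTASKS:
        return V[parent][state]
    return max(q(parent, state, a) for a in SUBTASKS[parent])
```

In a full implementation the recursion continues down the task hierarchy and the C tables are learned by a hierarchical Q-learning update, which is the algorithm whose convergence the paper proves.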