
Thomas G. Dietterich

Researcher at Oregon State University

Publications: 286
Citations: 58,937

Thomas G. Dietterich is an academic researcher from Oregon State University. The author has contributed to research in the topics of Reinforcement learning and Markov decision processes. The author has an h-index of 74 and has co-authored 279 publications receiving 51,935 citations. Previous affiliations of Thomas G. Dietterich include the University of Wyoming and Stanford University.

Papers
Patent

Methods for assisting computer users performing multiple tasks

TL;DR: This patent describes a method for assisting multi-tasking computer users that includes receiving from the user a specification of a task being performed or an indication that the task has been completed.
Proceedings Article

Applying the Weak Learning Framework to Understand and Improve C4.5

TL;DR: This paper performs experiments, suggested by the formal results, on AdaBoost and C4.5 within the weak learning framework, and argues through experimental results that the theory must be understood in terms of a measure of a boosting algorithm's behavior called its advantage sequence.
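
For context, the "advantage" of a boosting round is the weak hypothesis's edge over random guessing, i.e. one half minus its weighted error, and the advantage sequence is that quantity tracked across rounds. The sketch below is a generic AdaBoost loop with decision stumps standing in for C4.5; it illustrates the framework rather than reproducing the paper's experiments, and all names in it are illustrative.

```python
# Minimal AdaBoost sketch (labels in {-1, +1}) with decision stumps as
# weak learners. Stumps are an assumption for self-containment; the
# paper's experiments use C4.5 as the weak learner.
import numpy as np

def best_stump(X, y, w):
    """Return (error, feature, threshold, polarity) of the stump
    minimizing weighted 0-1 error under distribution w."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, rounds=20):
    """Run AdaBoost; return the ensemble and its advantage sequence."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # uniform initial distribution
    ensemble, advantages = [], []
    for _ in range(rounds):
        err, j, thr, pol = best_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        advantages.append(0.5 - err)        # edge over random guessing
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)      # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble, advantages

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        score += alpha * pol * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(score)
```

The returned advantage sequence is the quantity the paper proposes for analyzing a boosting run: if every round maintains a positive advantage, the standard weak-learning bounds guarantee the ensemble's training error shrinks exponentially in the number of rounds.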
Journal Article

Discovering patterns in sequences of events

TL;DR: A program, called SPARC/E, is described that implements most of the methodology as applied to discovering sequence-generating rules in the card game Eleusis, which is used as a source of examples for illustrating the performance of SPARC/E.
Patent

Machine-learning approach to modeling biological activity for molecular design and to modeling other characteristics

TL;DR: In this patent, the authors combine explicit representations of molecular shape with neural network learning methods to provide models with high predictive ability that generalize to different chemical classes, where structurally diverse molecules exhibiting similar surface characteristics are treated as similar.
Book Chapter

An Overview of MAXQ Hierarchical Reinforcement Learning

TL;DR: An overview of the MAXQ value function decomposition and its support for state abstraction and action abstraction is given.
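
For readers new to MAXQ, the heart of the value function decomposition is the identity below, stated here in the standard MAXQ notation as background rather than as a summary of the chapter itself:

$$Q^{\pi}(p, s, a) = V^{\pi}(a, s) + C^{\pi}(p, s, a)$$

Here $V^{\pi}(a, s)$ is the expected cumulative reward earned while executing subtask $a$ from state $s$, and the completion function $C^{\pi}(p, s, a)$ is the expected cumulative reward for finishing the parent task $p$ after $a$ terminates. Applying the identity recursively down the task hierarchy expresses the root task's value as a sum of subtask values and completion terms, which is what makes the state abstraction and action abstraction mentioned in the overview possible.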