scispace - formally typeset

Thomas G. Dietterich

Researcher at Oregon State University

Publications: 286
Citations: 58,937

Thomas G. Dietterich is an academic researcher from Oregon State University. The author has contributed to research topics including reinforcement learning and Markov decision processes. The author has an h-index of 74, having co-authored 279 publications receiving 51,935 citations. Previous affiliations of Thomas G. Dietterich include the University of Wyoming and Stanford University.

Papers
Book Chapter

A comparative study of ID3 and backpropagation for English text-to-speech mapping

TL;DR: The performance of the error backpropagation (BP) and ID3 learning algorithms was compared on the task of mapping English text to phonemes and stresses; BP consistently outperforms ID3 on this task by several percentage points.
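The comparison above can be sketched in modern terms by pitting an entropy-based decision tree (the family ID3 belongs to) against a small backpropagation-trained network. This is a minimal illustrative sketch using scikit-learn and the digits dataset as a stand-in; the actual paper used English text-to-phoneme data, and these hyperparameters are assumptions, not the paper's settings.

```python
# Hypothetical re-creation of an ID3-vs-backprop comparison.
# DecisionTreeClassifier with criterion="entropy" approximates ID3;
# MLPClassifier is a backpropagation-trained network.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy dataset standing in for the text-to-speech mapping data.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

tree_acc = tree.score(X_te, y_te)
mlp_acc = mlp.score(X_te, y_te)
```

On tasks like this the network typically edges out the tree by a few points, mirroring the paper's finding, though the margin depends heavily on the dataset.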
Proceedings Article

Learning probabilistic behavior models in real-time strategy games

TL;DR: The behavior model is based on the well-developed and generic paradigm of hidden Markov models, which supports a variety of uses for the design of AI players and human assistants and provides both a qualitative and quantitative assessment of the learned model's utility.
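The core computation behind any HMM-based behavior model is evaluating how likely an observed action sequence is under the model. This is a minimal NumPy sketch of the standard forward algorithm, not code from the paper; the matrices are generic placeholders.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: P(observation sequence | HMM).
    pi:  (S,)  initial state distribution
    A:   (S,S) transition matrix, A[i, j] = P(state j | state i)
    B:   (S,O) emission matrix,   B[i, k] = P(obs k | state i)
    obs: sequence of observation indices
    """
    # Initialize with the first observation's emission probabilities.
    alpha = pi * B[:, obs[0]]
    # Propagate forward: transition, then weight by the next emission.
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()
```

Scoring a player's observed action stream against several learned HMMs (one per strategy) and picking the highest-likelihood model is one way such a behavior model supports both qualitative and quantitative assessment.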
Proceedings Article

Learnability of the Superset Label Learning Problem

TL;DR: Empirical Risk Minimization (ERM) learners that use the superset error as the empirical risk measure are analyzed, and conditions for ERM learnability and sample complexity in the realizable case are given.
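In the superset label setting, each training instance comes with a set of candidate labels that is guaranteed to contain the true one, and a prediction incurs superset error only when it falls outside that set. This is a minimal sketch of that risk measure under my reading of the setup; the function name and representation are illustrative, not from the paper.

```python
def superset_error(predictions, candidate_sets):
    """Empirical superset risk: the fraction of instances whose
    predicted label is NOT in the candidate (superset) label set.
    predictions:    list of predicted labels
    candidate_sets: list of sets, each containing the true label
    """
    misses = sum(1 for y, s in zip(predictions, candidate_sets)
                 if y not in s)
    return misses / len(predictions)
```

An ERM learner in this setting picks the hypothesis minimizing this quantity over the training sample; the paper's contribution is characterizing when that suffices to learn the true labeling.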
Proceedings Article

Learning non-redundant codebooks for classifying complex objects

TL;DR: A simple yet effective framework for learning multiple non-redundant codebooks to extract discriminative information that was not captured by preceding codebooks and their corresponding classifiers is described.
Proceedings Article

Low bias bagged support vector machines

TL;DR: Experiments indicate that bagging low-bias SVMs (the "lobag" algorithm) never hurts generalization performance and often improves it compared with well-tuned single SVMs and with bags of individually well-tuned SVMs.
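The lobag idea is to tune the base SVM toward low bias (accepting higher variance) and then let bagging average the variance away. This is a hypothetical sketch with scikit-learn on synthetic data; the large-`C` choice and ensemble size are illustrative assumptions, not the paper's tuning procedure.

```python
# Sketch of the lobag idea: bag a deliberately low-bias SVM.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic data standing in for a real benchmark.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Large C pushes the SVM toward low bias / high variance;
# bagging then reduces the variance component of the error.
low_bias_svm = SVC(kernel="rbf", C=100.0, gamma="scale")
lobag = BaggingClassifier(low_bias_svm, n_estimators=25, random_state=0)
lobag.fit(X_tr, y_tr)

acc = lobag.score(X_te, y_te)
```

Tuning only for low bias (one knob) rather than jointly for bias and variance is also cheaper than cross-validating each bagged member individually.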