Thomas G. Dietterich

Researcher at Oregon State University

Publications -  286
Citations -  58937

Thomas G. Dietterich is an academic researcher at Oregon State University. He has contributed to research topics including reinforcement learning and Markov decision processes. He has an h-index of 74 and has co-authored 279 publications receiving 51,935 citations. His previous affiliations include the University of Wyoming and Stanford University.

Papers
Posted Content

Learning Scripts as Hidden Markov Models

TL;DR: This paper proposes the first formal framework for scripts based on Hidden Markov Models (HMMs), which can be applied to make a variety of inferences, including filling gaps in narratives and resolving ambiguous references.
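As a rough sketch of the kind of inference an HMM-based script model supports, the forward algorithm below computes the likelihood of an observed event sequence. The state names, event names, and probability tables are illustrative assumptions, not taken from the paper.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: P(observation sequence) under an HMM.

    start_p[s]    : prior probability of starting in state s
    trans_p[r][s] : probability of moving from state r to state s
    emit_p[s][o]  : probability of state s emitting observation o
    """
    # Initialize with the first observation.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    # Recursively fold in each subsequent observation.
    for o in obs[1:]:
        alpha = {
            s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
            for s in states
        }
    return sum(alpha.values())
```

A script model would layer meaning onto these pieces (states as script events, observations as narrative text), but the underlying recursion is the same.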
Proceedings Article

K-N-MOMDPs: Towards Interpretable Solutions for Adaptive Management.

TL;DR: In this article, the authors provide algorithms to solve K-N-MOMDPs, where K represents the maximum number of fully observable states and N represents the number of alpha-vectors.
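For context on the alpha-vector representation mentioned above: solvers in this family typically represent the value function over belief states as a finite set of alpha-vectors, and evaluating a belief means taking the maximum dot product. A minimal sketch (the belief and vector values are made-up numbers, not from the paper):

```python
def belief_value(belief, alpha_vectors):
    """Value of a belief state under a piecewise-linear convex value
    function: the max over alpha-vectors of their dot product with
    the belief. Fewer alpha-vectors means a more interpretable policy."""
    return max(
        sum(a * b for a, b in zip(alpha, belief))
        for alpha in alpha_vectors
    )
```

Bounding the number of alpha-vectors (the N in K-N-MOMDPs) caps the size of this max, which is what makes the resulting solutions easier to inspect.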
Proceedings ArticleDOI

Active EM to reduce noise in activity recognition

TL;DR: Experimental results on real users show that this active EM algorithm can significantly improve prediction precision, and that it performs better than either EM or active learning alone.
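A hedged sketch of the "active" selection step that such an algorithm might interleave with EM: pick the example whose current posterior over labels is most uncertain and query the user for its true label. The entropy criterion used here is a common choice for uncertainty sampling, not necessarily the paper's exact rule.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def pick_query(posteriors):
    """Active step: return the index of the example whose label
    posterior (as estimated by the current EM model) is most
    uncertain, i.e. has maximum entropy."""
    return max(range(len(posteriors)), key=lambda i: entropy(posteriors[i]))
```

After the queried label is obtained, it is clamped as observed evidence and EM re-runs on the remaining unlabeled data.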
Book ChapterDOI

A Multi-agent Architecture Integrating Learning and Fuzzy Techniques for Landmark-Based Robot Navigation

TL;DR: The suitability of reinforcement learning for automatically tuning agents within a multi-agent system (MAS) to optimize a complex tradeoff, namely camera use, is explored.

Exploiting monotonicity via logistic regression in Bayesian network learning

TL;DR: Two variants of the constrained logistic regression (CLR) model, M2b CLR and M3 CLR, are presented, in which the number of constraints required to enforce monotonicity does not grow exponentially with the number of parents, providing a practicable method for estimating conditional probabilities from very sparse data.
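To illustrate the general idea of monotonicity via logistic regression (not the paper's specific M2b/M3 parameterizations, which go beyond this summary): constraining each weight to be nonnegative makes the predicted conditional probability nondecreasing in every parent, so enforcing monotonicity costs one constraint per parent rather than one per parent configuration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def monotone_cpd(weights, bias, parent_values):
    """P(child = 1 | parents) as a logistic model with weights clipped
    to be nonnegative, so the probability is nondecreasing in each
    parent value. One nonnegativity constraint per parent suffices."""
    w = [max(0.0, wi) for wi in weights]  # enforce monotonicity
    return sigmoid(bias + sum(wi * xi for wi, xi in zip(w, parent_values)))
```

Because the number of free parameters grows linearly with the number of parents (rather than exponentially, as in a full conditional probability table), such a model can be estimated from very sparse data.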