Thomas G. Dietterich
Researcher at Oregon State University
Publications - 286
Citations - 58937
Thomas G. Dietterich is an academic researcher at Oregon State University. He has contributed to research on reinforcement learning and Markov decision processes, has an h-index of 74, and has co-authored 279 publications receiving 51,935 citations. Previous affiliations of Thomas G. Dietterich include the University of Wyoming and Stanford University.
Papers
Reinforcement learning in scheduling
TL;DR: This preliminary paper shows that learning to solve scheduling problems, such as Space Shuttle payload processing and automatic guided vehicle scheduling, can be usefully studied within the reinforcement learning framework.
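To illustrate the framework the paper works in, here is a minimal sketch of tabular Q-learning applied to a toy job-sequencing problem. The job names, processing times, and cost model are invented for illustration; this is not the paper's actual scheduling domain. Scheduling job `j` when `t` time units have elapsed incurs a cost equal to its completion time `t + time[j]`, so minimizing total cost is solved optimally by shortest-job-first, which gives a known answer to check the learned policy against.

```python
import random

# Toy illustration (jobs and costs are hypothetical, not from the paper).
TIMES = {"A": 3, "B": 1, "C": 2}

def q_learn(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    """Tabular Q-learning over cost-to-go; the state is the set of unscheduled jobs."""
    rng = random.Random(seed)
    Q = {}  # (frozenset of remaining jobs, job) -> estimated cost-to-go
    for _ in range(episodes):
        remaining, elapsed = frozenset(TIMES), 0
        while remaining:
            jobs = sorted(remaining)
            # Epsilon-greedy action selection (greedy = minimum estimated cost).
            if rng.random() < eps:
                j = rng.choice(jobs)
            else:
                j = min(jobs, key=lambda a: Q.get((remaining, a), 0.0))
            elapsed += TIMES[j]  # completion time of job j
            nxt = remaining - {j}
            future = min((Q.get((nxt, a), 0.0) for a in nxt), default=0.0)
            target = elapsed + future  # immediate cost plus best cost-to-go
            Q[(remaining, j)] = (1 - alpha) * Q.get((remaining, j), 0.0) + alpha * target
            remaining = nxt
    return Q

def greedy_schedule(Q):
    """Read out the greedy (minimum-cost) schedule from the learned Q-table."""
    remaining, order = frozenset(TIMES), []
    while remaining:
        j = min(sorted(remaining), key=lambda a: Q.get((remaining, a), 0.0))
        order.append(j)
        remaining = remaining - {j}
    return order
```

Because total completion time is minimized by running the shortest job first, the learned greedy schedule should order the jobs B, C, A.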
Journal ArticleDOI
Integrating Learning from Examples into the Search for Diagnostic Policies
TL;DR: In this article, a new family of systematic search algorithms based on the AO* algorithm is proposed to solve the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician.
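As a much-simplified sketch of what searching for a diagnostic policy means, the snippet below exhaustively evaluates the AND/OR space of test-sequencing policies for a tiny invented fault-diagnosis problem (the fault names, priors, and test costs are hypothetical, and this brute-force recursion stands in for the paper's AO*-based algorithms, which prune this space and learn the probabilities from data). Each policy node picks a test; its expected cost is the test's cost plus the outcome-weighted costs of the subproblems.

```python
# Hypothetical toy problem: identify which fault is present while
# minimizing expected testing cost. Each test deterministically
# reports whether the fault lies in its "yes" set.
PRIORS = {"f1": 0.5, "f2": 0.3, "f3": 0.2}
TESTS = {"t1": ({"f1"}, 2.0), "t2": ({"f2"}, 1.0)}  # name -> (yes-set, cost)

def best_policy(faults):
    """Return (expected cost, best first test) for the candidate fault set."""
    faults = frozenset(faults)
    if len(faults) <= 1:
        return 0.0, None  # diagnosis complete
    total = sum(PRIORS[f] for f in faults)
    best = (float("inf"), None)
    for name, (yes, cost) in TESTS.items():
        pos, neg = faults & yes, faults - yes
        if not pos or not neg:
            continue  # test is uninformative in this state
        p = sum(PRIORS[f] for f in pos) / total
        # Expected cost: run this test, then solve each outcome optimally.
        val = cost + p * best_policy(pos)[0] + (1 - p) * best_policy(neg)[0]
        if val < best[0]:
            best = (val, name)
    return best
```

With these numbers the cheap test t2 is worth running first despite being less likely to end the diagnosis immediately: its expected cost is 1 + 0.7 × 2 = 2.4, versus 2.5 for starting with t1.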
Book ChapterDOI
Machine learning in engineering automation
Steve Chien,Bradley L. Whitehall,Thomas G. Dietterich,Richard J. Doyle,Brian Falkenhainer,James Garrett,Stephen C.-Y. Lu +6 more
TL;DR: A taxonomy of engineering tasks for the application of machine learning technology is described, along with the challenges these tasks pose: noisy data, continuous quantities, mathematical formulas, large problem spaces, the need to incorporate multiple sources and forms of knowledge, and the need for user-system interaction.
KI-LEARN: Knowledge-Intensive Learning Methods for Knowledge-Rich/Data-Poor Domains
Thomas G. Dietterich,Angelo C. Restificar,Prasad Tadepalli,Bruce D'Ambrosio,Jon Herlocker,Alan Fern,Eric E. Altendorf,Sriraam Natarajan,Jianqiang Shen,Xinlong Bao +9 more
TL;DR: The goal of this research effort was to develop a new methodology, called KI-LEARN (Knowledge-Intensive LEARNing), that combines domain knowledge with sparse training data to construct high-performance systems; the work specifically shows how qualitative constraints can be incorporated into learning algorithms.