Author

Thomas G. Dietterich

Bio: Thomas G. Dietterich is an academic researcher from Oregon State University. The author has contributed to research in topics: Reinforcement learning & Markov decision process. The author has an h-index of 74 and has co-authored 279 publications receiving 51,935 citations. Previous affiliations of Thomas G. Dietterich include University of Wyoming & Stanford University.


Papers
Proceedings ArticleDOI
TL;DR: A conditional mixture model is presented for predicting the presence and amount of rain at a weather station from measurements at nearby stations; evaluation on simulated faults from the Oklahoma Mesonet shows very good performance.
Abstract: Rainfall is a very important weather variable, especially for agriculture. Unfortunately, rain gauges fail frequently. This paper describes a conditional mixture model for predicting the presence and amount of rain at a weather station based on measurements at nearby stations. The model is evaluated on simulated faults (blocked rain gauges) inserted into observations from the Oklahoma Mesonet. Using the negative log-likelihood as an anomaly score, we evaluate the area under the ROC and precision-recall curves for detecting these faults. The results show very good performance.
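One plausible reading of such a model is a two-part conditional density: a presence probability times a conditional amount density, with the negative log-likelihood serving as the anomaly score. A minimal sketch of that scoring step, assuming a scikit-learn-style classifier for presence and a hypothetical regressor returning log-normal parameters for the amount (the paper's actual mixture components may differ):

```python
import numpy as np
from scipy import stats

def anomaly_score(x_neighbors, y_obs, presence_model, amount_model):
    """Negative log-likelihood of observation y_obs given neighbor
    measurements, used as the anomaly score for detecting blocked gauges.

    presence_model: scikit-learn-style classifier giving P(rain | neighbors).
    amount_model:   hypothetical regressor returning (mu, sigma) of a
                    log-normal density over the rain amount given neighbors.
    """
    p_rain = presence_model.predict_proba(x_neighbors.reshape(1, -1))[0, 1]
    if y_obs == 0.0:
        likelihood = 1.0 - p_rain                       # dry observation
    else:
        mu, sigma = amount_model(x_neighbors)
        likelihood = p_rain * stats.lognorm.pdf(y_obs, s=sigma, scale=np.exp(mu))
    return -np.log(max(likelihood, 1e-300))             # higher = more anomalous
```

Sweeping a threshold over this score traces out the ROC and precision-recall curves used in the evaluation.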
Proceedings Article
01 Jan 2015
TL;DR: A domain-agnostic visual analytic design and implementation for testing and debugging MDPs, MDPVIS, which treats optimization as an open-ended process whose parameters are repeatedly changed for testing and debugging.
Abstract: Markov Decision Process (MDP) simulators and optimization algorithms integrate several systems and functions that are collectively subject to failures of specification, implementation, integration, and optimization. We present a domain-agnostic visual analytic design and implementation for testing and debugging MDPs: MDPVIS. A common approach for solving Markov Decision Processes is to implement a simulator of the stochastic dynamics of the MDP and a Monte Carlo optimization algorithm that invokes this simulator. The resulting software system is often realized by integrating several systems and functions that are collectively subject to failures (referred to as "bugs") of specification, implementation, integration, and optimization. Since bugs are subject to the same stochastic processes as their underlying systems, detecting and characterizing bugs requires exploration with an "informed trial and error" (Sedlmair et al. 2014) process. This process involves writing an interactive client to manually execute transitions, followed by a visualization of state development as a policy rule is followed. A domain-agnostic visual analytic interface could facilitate testing and debugging, but during semi-structured interviews of MDP researchers we did not find anyone using a generic visualization tool for testing. We posit this is because researchers have heretofore not had access to a visualization that is easily connected to their MDP simulator and MDP optimizer. In the following section we summarize the implementation and integration of MDPVIS presented in McGregor et al. (2015).

Implementation and Integration

MDPVIS's target users are researchers interested in steering the optimization itself, simulator developers who are interested in ensuring the policies optimized for the problem domain are well founded, and domain policy experts primarily interested in the outcomes produced by the optimized policy. In real-world settings these roles can be filled by a single person, or each role can be performed by a large team of developers and domain experts. MDPVIS extends computational steering from the high-performance scientific visualization community (Parker et al. 1996). Whereas computational steering traditionally refers to modifying a computer process during its execution (Mulder, van Wijk, and van Liere 1999), we treat optimization as an open-ended process whose parameters are repeatedly changed for testing and debugging. Sedlmair et al. (2014) label techniques for understanding the relationship between input parameters and outputs as Parameter Space Analysis (PSA): "...the systematic variation of model input parameters, generating outputs for each combination of parameters, and investigating the relation between parameter settings and corresponding outputs." This is a suitable definition for the MDP debugging and testing processes. Testing for MDP bugs requires exploring Monte Carlo rollouts. These rollouts are the output of the system under test, but since the distribution of these rollouts is defined by applying a policy in many successive states, the rollouts are tightly coupled with the parameter space of the MDP's component functions. Similarly, establishing bug causality (debugging) requires varying the model parameters and examining the resulting rollouts. The VL/HCC paper explores MDP testing questions in the following six broad tasks introduced by Sedlmair et al. (2014):

1. Fitting: Do the outputs match real-world data or expectations?
2. Outliers: Are low-probability events occurring with unexpected frequency?
3. Partition: Do different system parameters produce the expected differences?
4. Optimization: Did the optimization algorithm find the local optimum, and does the policy exploit a bug in the specification or implementation?
5. Uncertainty: How confident are we in the proposed results?
6. Sensitivity: Do small changes to the system result in big changes to the optimized policy?

To test these questions, MDPVIS (Figure 1) has four computational steering control sets and three visualization areas. The controls give the reward, model, and policy parameters that are exposed by the MDP's software. These layers are memoized in an exploration history that records the parameters and rollouts computed by the MDP. The first visualization area shows state distributions at time steps under the current policy. The second visualization area gives the distribution of a variable's development through time. The last visualization area gives details of individual states. Each of these steering controls and visualizations is designed to integrate with MDP simulators and optimizers using the same read-eval-print loop (REPL) that is typically implemented in current development practices.

We built MDPVIS as a data-driven web application. The web stack emphasizes standard data interchange formats that are easily linked to MDP simulators and optimization algorithms. We identified four HTTP requests (initialize, rollouts, optimize, and state) that are answered by the MDP simulator or optimizer. In each case the current values of the steering controls are sent to a web server acting as a bridge between the HTTP request and the syntax expected by the REPL.

1. /initialize – Ask for the steering parameters that should be displayed to the user. The parameters are a list of tuples, each containing the name, description, minimum value, maximum value, and current value of a parameter. These parameters are then rendered as HTML input elements for the user to modify. Following initialization, MDPVIS automatically requests rollouts for the initial parameters.
2. /rollouts?QUERY – Get rollouts for the parameters that are currently defined in the user interface. The server returns an array of arrays containing the state variables that should be rendered for each time step.
3. /optimize?QUERY – Get a newly optimized policy. This sends all the parameters defined in the user interface, and the optimization algorithm returns a newly optimized policy.
4. /state?IDENTIFIER – Get the full state details and images. This is required for high-dimensional problems in which the entire state cannot be returned to the client for every state in a rollout.

The most domain-specific element of any MDP visualization is the representation of a specific state. In Figure 1, individual states are given as two-dimensional images of landscape fuel levels. This is a visualization that our forestry colleagues typically generate for natural resource domains. The fourth HTTP request can optionally return images to accommodate these landscapes and arbitrary domain images. These landscapes can be rendered without any changes to the MDPVIS code base. A live version of the visualization is available at MDPvis.github.io for a wildfire suppression policy domain (Houtman et al. 2013). The visualization has been tested on Google Chrome and Firefox and is responsive to a variety of screen resolutions.
In the VL/HCC paper we presented a use-case study to provide anecdotal evidence of the utility of MDPVIS on the wildfire problem. The case study involved user sessions with our forestry economics collaborators, who have formulated an MDP optimization problem to study fire suppression policies. When applying MDPVIS we found numerous simulator and optimization bugs.
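A minimal sketch of the four-endpoint bridge described above, using Flask; the simulator and optimizer objects here are hypothetical stand-ins for a domain's MDP code, and the real integration (McGregor et al. 2015) may differ:

```python
from flask import Flask, jsonify, request

class StubSimulator:                      # hypothetical MDP simulator
    def sample_rollouts(self, params):
        return [[{"t": 0, "fuel": 1.0}]]  # one rollout, one time step
    def state_details(self, identifier):
        return {"id": identifier, "image": None}

class StubOptimizer:                      # hypothetical Monte Carlo optimizer
    def optimize(self, params):
        return {"policy": "suppress-above-threshold"}

app = Flask(__name__)
simulator, optimizer = StubSimulator(), StubOptimizer()

@app.route("/initialize")
def initialize():
    # Steering parameters, rendered by the client as HTML inputs.
    return jsonify(params=[{"name": "discount", "description": "discount factor",
                            "min": 0.0, "max": 1.0, "current": 0.96}])

@app.route("/rollouts")
def rollouts():
    params = dict(request.args)           # current steering-control values
    return jsonify(rollouts=simulator.sample_rollouts(params))

@app.route("/optimize")
def optimize():
    params = dict(request.args)
    return jsonify(policy=optimizer.optimize(params))

@app.route("/state")
def state():
    # /state?IDENTIFIER -- the raw query string is the state identifier.
    return jsonify(simulator.state_details(request.query_string.decode()))

if __name__ == "__main__":
    app.run()
```

Because each endpoint only exchanges JSON, swapping the stubs for a real simulator and optimizer requires no changes on the visualization side.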
Journal Article
TL;DR: In this article, the authors propose a new deep model that directly learns and predicts over irregularly sampled spatial-temporal data, without voxelization, by leveraging a recent convolutional architecture for static point clouds.
Abstract: We consider the problem of modeling the dynamics of continuous spatial-temporal processes represented by irregular samples through both space and time. Such processes occur in sensor networks, citizen science, multi-robot systems, and many others. We propose a new deep model that is able to directly learn and predict over this irregularly sampled data, without voxelization, by leveraging a recent convolutional architecture for static point clouds. The model also easily incorporates the notion of multiple entities in the process. In particular, the model can flexibly answer prediction queries about arbitrary space-time points for different entities regardless of the distribution of the training or test-time data. We present experiments on real-world weather station data and battles between large armies in StarCraft II. The results demonstrate the model's flexibility in answering a variety of query types and demonstrate improved performance and efficiency compared to state-of-the-art baselines.
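As a toy illustration of the query interface only (not the paper's learned point-cloud convolution), a prediction at an arbitrary space-time point can be formed by kernel-weighting the irregular samples:

```python
import numpy as np

def predict_at(query_xyt, samples_xyt, samples_val, bandwidth=1.0):
    """Toy stand-in for querying a continuous space-time process at an
    arbitrary point: Gaussian-kernel weighting over irregular samples.
    (The paper learns this aggregation; the fixed kernel here is illustrative.)

    query_xyt: (3,) array of (x, y, t); samples_xyt: (N, 3); samples_val: (N,).
    """
    d2 = np.sum((samples_xyt - query_xyt) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return float(np.sum(w * samples_val) / (np.sum(w) + 1e-12))
```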
Journal ArticleDOI
TL;DR: Bounds on the error of these methods are proved, and it is shown that removing features with small weights has little influence on prediction accuracy, suggesting that these methods exhibit feature-selection ability.
Journal ArticleDOI
TL;DR: In this article, the authors show that when the reward function decomposes additively into endogenous and exogenous components, the MDP can be decomposed into an exogenous Markov reward process and an endogenous Markov decision process (optimizing the endogenous reward), and they give algorithms for discovering the exogenous and endogenous subspaces of the state space when these are mixed through linear combination.
Abstract: Exogenous state variables and rewards can slow reinforcement learning by injecting uncontrolled variation into the reward signal. This paper formalizes exogenous state variables and rewards and shows that if the reward function decomposes additively into endogenous and exogenous components, the MDP can be decomposed into an exogenous Markov Reward Process (based on the exogenous reward) and an endogenous Markov Decision Process (optimizing the endogenous reward). Any optimal policy for the endogenous MDP is also an optimal policy for the original MDP, but because the endogenous reward typically has reduced variance, the endogenous MDP is easier to solve. We study settings where the decomposition of the state space into exogenous and endogenous state spaces is not given but must be discovered. The paper introduces and proves correctness of algorithms for discovering the exogenous and endogenous subspaces of the state space when they are mixed through linear combination. These algorithms can be applied during reinforcement learning to discover the exogenous space, remove the exogenous reward, and focus reinforcement learning on the endogenous MDP. Experiments on a variety of challenging synthetic MDPs show that these methods, applied online, discover large exogenous state spaces and produce substantial speedups in reinforcement learning.
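An illustrative heuristic for the linear-mixing case (not the paper's exact algorithm, which comes with correctness proofs): fit a linear dynamics model and take the state directions that the fitted action coefficients cannot influence as candidate exogenous directions.

```python
import numpy as np

def exogenous_subspace(S, A, S_next, tol=1e-6):
    """Illustrative heuristic: fit a linear model S_next ~ S @ B + A @ C,
    then return the (approximate) null space of C. Projections of the state
    onto those directions evolve, to first order, independently of the
    action, i.e. exogenously. On noisy data tol should be set relative to
    the largest singular value.

    S: (T, d_s) states, A: (T, d_a) actions, S_next: (T, d_s) next states.
    Returns a (d_s, k) matrix whose columns span the candidate subspace.
    """
    X = np.hstack([S, A])
    coef, *_ = np.linalg.lstsq(X, S_next, rcond=None)
    C = coef[S.shape[1]:]                     # action block, shape (d_a, d_s)
    _, sing, Vt = np.linalg.svd(C)
    sing_padded = np.concatenate([sing, np.zeros(Vt.shape[0] - len(sing))])
    return Vt[sing_padded < tol].T            # directions actions cannot affect
```

Given such a subspace, the exogenous reward component can be estimated and subtracted, leaving a lower-variance endogenous reward for the learner.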

Cited by
Journal ArticleDOI
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in splitting, and the methods are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.

79,257 citations
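The internal estimates described above are available off the shelf. A minimal scikit-learn sketch (the dataset and hyperparameters here are illustrative), showing the out-of-bag error response as the number of features tried per split grows, plus the variable-importance estimates:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
for m in (1, 4, 16):                        # features considered at each split
    forest = RandomForestClassifier(
        n_estimators=500, max_features=m, oob_score=True, random_state=0)
    forest.fit(X, y)
    # Out-of-bag accuracy: an internal error estimate, no held-out set needed.
    print(m, forest.oob_score_)
print(forest.feature_importances_)          # internal variable-importance measure
```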

Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, convolutional neural networks are shown to outperform other techniques on handwritten character recognition, and a new learning paradigm, graph transformer networks (GTNs), is proposed for globally training multimodule recognition systems with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
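A minimal LeNet-style convolutional network in PyTorch, in the spirit of the architecture described above; the layer sizes are illustrative rather than the paper's exact LeNet-5 configuration:

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Alternating convolution and subsampling, as in the classic design.
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 4 * 4, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(), nn.Linear(84, n_classes),
        )

    def forward(self, x):                   # x: (batch, 1, 28, 28) digit images
        return self.classifier(self.features(x))

logits = SmallConvNet()(torch.randn(8, 1, 28, 28))   # -> shape (8, 10)
```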

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
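A sketch of a single Inception module in PyTorch, following the design described above: parallel 1x1, 3x3, and 5x5 convolutions plus pooling, with 1x1 reductions to keep the computational budget constant. The channel counts are illustrative:

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, c1, 1)                       # 1x1 branch
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), nn.ReLU(),
                                nn.Conv2d(c3r, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), nn.ReLU(),
                                nn.Conv2d(c5r, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, 1))

    def forward(self, x):
        # Concatenate the four branch outputs along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

y = InceptionModule(192, 64, 96, 128, 16, 32, 32)(torch.randn(1, 192, 28, 28))
# y.shape == (1, 256, 28, 28): width grows while spatial size is preserved.
```

Stacking modules like this one is how the network grows deep and wide while the 1x1 bottlenecks cap the per-module cost.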

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning; the discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.

37,989 citations
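A minimal tabular Q-learning sketch, one of the temporal-difference methods covered in Part II of the book; the toy chain MDP here is illustrative:

```python
import numpy as np

n_states, n_actions, gamma, alpha, eps = 5, 2, 0.95, 0.1, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    # Action 0 moves left, action 1 moves right; reward only at the right end.
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, float(s2 == n_states - 1)

for _ in range(2000):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # TD update toward the one-step bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))   # greedy policy: move right in all non-terminal states
```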