Author

Thomas G. Dietterich

Bio: Thomas G. Dietterich is an academic researcher at Oregon State University. He has contributed to research on reinforcement learning and Markov decision processes, has an h-index of 74, and has co-authored 279 publications receiving 51,935 citations. His previous affiliations include the University of Wyoming and Stanford University.


Papers
01 Jan 2006
TL;DR: This chapter describes the development of general-purpose pattern-recognition algorithms for identification and classification of insects and mesofauna and the design and construction of mechanical devices for handling and photographing specimens.
Abstract: Many ecological science and environmental monitoring problems can benefit from inexpensive, automated methods of counting insect and mesofaunal populations. Existing methods for obtaining population counts require expensive and tedious manual identification by human experts. This chapter describes the development of general-purpose pattern-recognition algorithms for identification and classification of insects and mesofauna and the design and construction of mechanical devices for handling and photographing specimens. This chapter presents techniques being explored in the first two years of a four year project, along with the results obtained thus far. This project’s primary focus to date has been the classification of stonefly larvae for assessment of stream water quality. Imaging and specimen manipulation apparatus that semi-automatically provides high-resolution images of individual specimens from multiple angles has also been designed and assembled in the context of this project. An additional project target has been the development of robust classification algorithms based on interest operators, region descriptors, clustering, and 3D reconstruction to automatically classify each specimen from its images.
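The abstract names the ingredients of a classic bag-of-visual-words pipeline: interest operators, region descriptors, clustering into a codebook, then a classifier. Below is a minimal sketch of that general approach in Python; the SIFT/k-means/SVM choices, the parameters, and the train_paths/train_labels variables are illustrative assumptions, not the project's actual code.

```python
# Hypothetical bag-of-visual-words classifier: interest operators +
# region descriptors + clustering + a classifier.
import cv2                      # pip install opencv-python (>= 4.4 for SIFT)
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_descriptors(image_paths):
    """Detect interest points and compute SIFT region descriptors per image."""
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None else np.empty((0, 128)))
    return per_image

def bow_histograms(per_image_descs, codebook):
    """Quantize each image's descriptors against the clustered codebook."""
    k = codebook.n_clusters
    hists = []
    for desc in per_image_descs:
        hist = np.zeros(k)
        if len(desc):
            for word in codebook.predict(desc.astype(np.float64)):
                hist[word] += 1
            hist /= hist.sum()
        hists.append(hist)
    return np.array(hists)

# Usage (train_paths / train_labels are placeholders for your labeled data):
# train_descs = extract_descriptors(train_paths)
# codebook = KMeans(n_clusters=100, n_init=10).fit(np.vstack(train_descs))
# clf = SVC().fit(bow_histograms(train_descs, codebook), train_labels)
```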

13 citations

Proceedings ArticleDOI
17 Dec 2015
TL;DR: MDPVIS addresses three visualization research gaps (data acquisition, data analysis, and cognition) through a general simulator-visualization interface, a generalized MDP information visualization, and exposure of model components; it generalizes a visualization originally developed for wildfire management.
Abstract: Researchers in AI and Operations Research employ the framework of Markov Decision Processes (MDPs) to formalize problems of sequential decision making under uncertainty. A common approach is to implement a simulator of the stochastic dynamics of the MDP and a Monte Carlo optimization algorithm that invokes this simulator to solve the MDP. The resulting software system is often realized by integrating several systems and functions that are collectively subject to failures of specification, implementation, integration, and optimization. We present these failures as queries for a computational steering visual analytic system (MDPVIS). MDPVIS addresses three visualization research gaps. First, the data acquisition gap is addressed through a general simulator-visualization interface. Second, the data analysis gap is addressed through a generalized MDP information visualization. Finally, the cognition gap is addressed by exposing model components to the user. MDPVIS generalizes a visualization for wildfire management. We use that problem to illustrate MDPVIS.
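The simulator-plus-Monte-Carlo pattern the abstract describes can be sketched in a few lines. The toy MDP, the function names, and the interface below are illustrative assumptions, not the MDPVIS simulator-visualization interface itself.

```python
# Sketch of the common pattern: a simulator of stochastic MDP dynamics plus
# Monte Carlo estimation of a policy's expected return.
import random

def simulate(policy, init_state, horizon, transition, reward):
    """Roll out one trajectory through the stochastic dynamics."""
    state, total = init_state, 0.0
    for _ in range(horizon):
        action = policy(state)
        next_state = transition(state, action)      # stochastic step
        total += reward(state, action, next_state)
        state = next_state
    return total

def monte_carlo_value(policy, init_state, horizon, transition, reward, n=1000):
    """Estimate expected return by averaging many simulated rollouts."""
    returns = [simulate(policy, init_state, horizon, transition, reward)
               for _ in range(n)]
    return sum(returns) / n

# Toy example: a random walk where the action biases the step.
def transition(s, a): return s + a + random.choice([-1, 0, 1])
def reward(s, a, s2): return -abs(s2)               # stay near the origin
policy = lambda s: -1 if s > 0 else 1
print(monte_carlo_value(policy, 0, 20, transition, reward))
```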

12 citations

Journal Article
TL;DR: This paper explores how to incorporate evolving design requirements into feasible product development strategies, illustrated by the development of an insect imaging device for the Bug ID project at Oregon State University.
Abstract: It is crucial for the development of high-quality products that design requirements are identified and clarified as early as possible in the design process. In many projects the design requirements and design specifications evolve during the project cycle. Shifting needs of the customer, advancing technology, market considerations, and even additional customers can cause the requirements to change. Because the different parts of a product are usually interconnected, the requirements and specifications for one part of a product often depend on the requirements and the evolving design of other parts. If uncontrolled, design changes derived from evolving requirements may propagate through a design and disrupt the product development schedule, increase development costs, and result in a failure to satisfy the customers’ needs. Designing with changing requirements is even more difficult when the product is novel or the development team is interdisciplinary. This paper explores possible design strategies for product development under changing requirements. Six design strategies were identified and implemented in the development of an insect imaging device for the Bug ID project at Oregon State University. Based on experiences throughout this interdisciplinary project, the paper explores how to incorporate evolving design requirements into feasible product development strategies.

12 citations

Proceedings Article
09 Apr 2009
TL;DR: The application of a novel learning and problem solving architecture to the domain of airspace management, where multiple requests for the use of airspace need to be reconciled and managed automatically, is described.
Abstract: In this paper we describe the application of a novel learning and problem solving architecture to the domain of airspace management, where multiple requests for the use of airspace need to be reconciled and managed automatically. The key feature of our "Generalized Integrated Learning Architecture" (GILA) is a set of integrated learning and reasoning (ILR) systems coordinated by a central meta-reasoning executive (MRE). Each ILR learns independently from the same training example and contributes to problem-solving in concert with other ILRs as directed by the MRE. Formal evaluations show that our system performs as well as or better than humans after learning from the same training data. Further, GILA outperforms any individual ILR run in isolation, thus demonstrating the power of the ensemble architecture for learning and problem solving.
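The ensemble idea, independent learners coordinated by a central executive, can be sketched abstractly. The classes and the scoring function below are assumptions for illustration only; the published GILA architecture is considerably richer than this.

```python
# Highly simplified sketch of the GILA-style ensemble: several independently
# trained learners each propose a solution, and a central executive selects
# among them. Names and interfaces here are illustrative assumptions.
class ILR:
    """An integrated learning-and-reasoning component."""
    def __init__(self, learn_fn, solve_fn):
        self.learn_fn, self.solve_fn = learn_fn, solve_fn
        self.model = None

    def learn(self, example):
        self.model = self.learn_fn(example)     # each ILR learns independently

    def propose(self, problem):
        return self.solve_fn(self.model, problem)

class MetaReasoningExecutive:
    """Coordinates the ILRs and picks the best proposal for each problem."""
    def __init__(self, ilrs, score):
        self.ilrs, self.score = ilrs, score

    def solve(self, problem):
        proposals = [ilr.propose(problem) for ilr in self.ilrs]
        return max(proposals, key=self.score)   # ensemble beats any single ILR
```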

12 citations

01 Jan 2007
TL;DR: Despite great advances in search technology, information organization and retrieval challenges remain, and users often still navigate manually to retrieve files, since generating appropriate search terms is difficult, especially when the time gaps between subsequent accesses of documents are large.

Abstract: Despite great advances in search technology, information organization and retrieval challenges remain. Users often still navigate manually to retrieve files, since generating appropriate search terms is difficult, especially when the time gaps between subsequent accesses of documents are large. Recent research [1] has shown that common search criteria, such as creation or modification time, are remembered inaccurately about 50% of the time. Even the title of a document, the most obvious search criterion, is remembered only partially correctly 47% of the time and entirely incorrectly 20% of the time.

12 citations


Cited by
Journal ArticleDOI
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in splitting, and the ideas are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation, and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
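Both ideas in the abstract, per-split random feature selection and internal error estimates, are exposed directly in scikit-learn's random forest implementation; the dataset and hyperparameters below are just an example.

```python
# Per-split random feature selection (max_features) and internal
# out-of-bag estimates (oob_score_, feature_importances_) in scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(
    n_estimators=500,       # generalization error converges as trees are added
    max_features="sqrt",    # random subset of features tried at each split
    oob_score=True,         # internal (out-of-bag) estimate of accuracy
    random_state=0,
).fit(X, y)

print("OOB accuracy:", forest.oob_score_)
print("Variable importance:", forest.feature_importances_)
```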

79,257 citations

Journal ArticleDOI
01 Jan 1998
TL;DR: This paper reviews methods for handwritten character recognition, shows that convolutional neural networks outperform other techniques on a standard digit recognition task, and introduces graph transformer networks (GTNs), which allow multi-module recognition systems to be trained globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
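A minimal LeNet-style convolutional network, the architecture family this paper made standard for digit recognition, fits in a few lines of PyTorch. The layer sizes below follow the usual LeNet-5 description but are illustrative rather than a faithful reproduction.

```python
# A LeNet-style convolutional network: stacked convolution + subsampling
# layers feeding fully connected classification layers.
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):             # x: (batch, 1, 32, 32) grayscale images
        return self.classifier(self.features(x))

logits = LeNetLike()(torch.randn(8, 1, 32, 32))   # -> shape (8, 10)
```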

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper proposes Inception, a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
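The "carefully crafted design" centers on the Inception module: parallel 1x1, 3x3, and 5x5 convolutions plus pooling, concatenated along the channel axis, with 1x1 convolutions used to keep the computational budget in check. A PyTorch sketch follows; the channel counts are illustrative, not GoogLeNet's published configuration.

```python
# One Inception module: four parallel branches whose outputs are concatenated
# along the channel dimension; 1x1 convolutions reduce cost before the larger
# convolutions.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, 1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_reduce, 1), nn.ReLU(),
            nn.Conv2d(c3_reduce, c3, 3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_reduce, 1), nn.ReLU(),
            nn.Conv2d(c5_reduce, c5, 5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1),
        )

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

out = InceptionModule(192, 64, 96, 128, 16, 32, 32)(torch.randn(1, 192, 28, 28))
# out.shape == (1, 64 + 128 + 32 + 32, 28, 28)
```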

40,257 citations

Book
18 Nov 2016
TL;DR: This book introduces deep learning, a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and surveys applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Book
01 Jan 1998
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
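One of the Part II solution methods, temporal-difference learning, fits in a short sketch. The tabular Q-learning variant and the toy chain MDP below are illustrative choices, not an example from the book.

```python
# Tabular Q-learning (a temporal-difference method) on a toy chain MDP:
# the agent starts at the left end and is rewarded for reaching the right end.
import random
from collections import defaultdict

def q_learning(n_states=5, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    Q = defaultdict(float)                 # action values Q[(state, action)]
    actions = [-1, +1]                     # step left / right along the chain
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:           # rightmost state is terminal
            if random.random() < eps:      # epsilon-greedy exploration
                a = random.choice(actions)
            else:                          # greedy with random tie-breaking
                a = max(actions, key=lambda b: (Q[(s, b)], random.random()))
            s2 = max(0, s + a)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # TD update: move Q[(s, a)] toward the bootstrapped one-step target.
            target = r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
print([round(max(Q[(s, a)] for a in (-1, 1)), 2) for s in range(4)])
# state values rise toward the goal, roughly [0.73, 0.81, 0.9, 1.0]
```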

37,989 citations