Thomas G. Dietterich

Researcher at Oregon State University

Publications - 286
Citations - 58937

Thomas G. Dietterich is an academic researcher at Oregon State University. He has contributed to research in topics including reinforcement learning and Markov decision processes. He has an h-index of 74 and has co-authored 279 publications that have received 51,935 citations. His previous affiliations include the University of Wyoming and Stanford University.

Papers
Posted Content

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

TL;DR: In this paper, the authors establish rigorous benchmarks for image classifier robustness and propose ImageNet-C, a robustness benchmark that evaluates performance on common corruptions and perturbations rather than worst-case adversarial perturbations.
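
The corruption-robustness evaluation described above can be summarized in a few lines. The sketch below is illustrative only, with a hypothetical classifier API and corruption function rather than the paper's released code.

```python
# Minimal sketch of an ImageNet-C style corruption-error computation.
# `classifier.predict` and `corrupt` are hypothetical placeholders.
import numpy as np

def corruption_error(classifier, images, labels, corrupt, severities=(1, 2, 3, 4, 5)):
    """Top-1 error averaged over the severity levels of one corruption type."""
    errors = []
    for s in severities:
        preds = classifier.predict(corrupt(images, severity=s))  # hypothetical API
        errors.append(np.mean(preds != labels))
    return float(np.mean(errors))
```

In the paper, each corruption error is additionally normalized by the error of a fixed baseline classifier (AlexNet) before the results are averaged into the mCE score.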
Book

Adaptive computation and machine learning

TL;DR: This book gives an overview of recent efforts to deal with covariate shift, a challenging situation in which the distribution of inputs differs between the training and test stages while the conditional distribution of outputs given inputs remains the same.
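
One standard remedy discussed in the covariate-shift literature is importance weighting, where each training example is weighted by the ratio of test to training input densities. The sketch below is a minimal illustration, assuming the density ratios have already been estimated by some separate procedure; the function names are not from the book.

```python
# Minimal sketch of importance-weighted empirical risk under covariate shift.
# `density_ratios[i]` is an estimate of p_test(x_i) / p_train(x_i), assumed given.
import numpy as np

def importance_weighted_loss(losses, density_ratios):
    """Reweight per-example training losses so their mean estimates test-time risk."""
    losses = np.asarray(losses, dtype=float)
    w = np.asarray(density_ratios, dtype=float)
    return float(np.mean(w * losses))

# Examples that are rare in training but common at test time (ratio > 1) are up-weighted.
print(importance_weighted_loss([0.2, 0.9, 0.4], [0.5, 2.0, 1.0]))
```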
Proceedings Article

Deep Anomaly Detection with Outlier Exposure

TL;DR: In extensive experiments on natural language processing and on small- and large-scale vision tasks, the authors find that Outlier Exposure significantly improves detection performance. They also find that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images, and use OE to mitigate this issue.
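
The core idea of Outlier Exposure is to add an auxiliary training term that pushes the model's predictions on a dataset of known outliers toward the uniform distribution. The PyTorch-flavored Python sketch below is a minimal illustration of that objective; the tensor names and the weight `lam` are illustrative rather than the paper's exact code.

```python
# Sketch of an Outlier Exposure (OE) style loss: cross-entropy on in-distribution
# data plus a term encouraging a uniform softmax on auxiliary outlier data.
import torch.nn.functional as F

def oe_loss(logits_in, targets_in, logits_out, lam=0.5):
    ce_in = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy between the uniform distribution over classes and the
    # model's softmax on outlier inputs equals the mean negative log-softmax.
    uniform_ce = -F.log_softmax(logits_out, dim=1).mean()
    return ce_in + lam * uniform_ce
```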
Proceedings Article

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

TL;DR: This paper standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.
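
ImageNet-P measures prediction stability rather than accuracy: roughly, how often the top-1 prediction flips across consecutive frames of a gently perturbed image sequence. The sketch below illustrates that flip-rate idea on an assumed list of per-frame predictions; it is not the benchmark's official scoring code.

```python
# Illustrative flip-rate computation over a sequence of top-1 predictions.
def flip_rate(predictions):
    """Fraction of consecutive frames whose top-1 prediction differs."""
    flips = sum(p != q for p, q in zip(predictions, predictions[1:]))
    return flips / max(len(predictions) - 1, 1)

print(flip_rate([3, 3, 3, 7, 7, 3]))  # two flips over five transitions -> 0.4
```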
Proceedings Article

Learning with many irrelevant features

TL;DR: It is shown that any learning algorithm implementing the MIN-FEATURES bias requires Θ((1/ε)(ln(1/δ) + 2^p + p ln n)) training examples to guarantee PAC-learning a concept having p relevant features out of n available features; this result suggests that training data should be preprocessed to remove irrelevant features before being given to ID3 or FRINGE.
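
To make the scaling of this bound concrete, the snippet below evaluates the expression for a couple of made-up parameter settings; the values are purely illustrative.

```python
# Illustrative evaluation of (1/eps) * (ln(1/delta) + 2**p + p * ln(n)),
# the sample-complexity expression quoted above. Parameter values are made up.
from math import log

def min_features_bound(eps, delta, p, n):
    return (1.0 / eps) * (log(1.0 / delta) + 2 ** p + p * log(n))

print(min_features_bound(0.1, 0.05, p=3, n=100))   # ~248 examples
print(min_features_bound(0.1, 0.05, p=10, n=100))  # ~10,700: the 2**p term dominates
```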