Katherine Driggs-Campbell
Researcher at University of Illinois at Urbana–Champaign
Publications - 67
Citations - 1306
Katherine Driggs-Campbell is an academic researcher at the University of Illinois at Urbana–Champaign. Her research focuses on reinforcement learning and computer science. She has an h-index of 15 and has co-authored 58 publications receiving 695 citations. Previous affiliations of Katherine Driggs-Campbell include Carnegie Mellon University and Stanford University.
Papers
Journal ArticleDOI
Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving
TL;DR: A general framework for tactical decision making is introduced that combines planning and learning in the form of Monte Carlo tree search and deep reinforcement learning. It is based on the AlphaGo Zero algorithm, extended to a domain with a continuous state space where self-play cannot be used.
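As a rough illustration of the planning-plus-learning idea, the sketch below shows AlphaGo Zero-style node selection in Monte Carlo tree search, where a learned prior guides which child to explore. The `children` layout and names are hypothetical, not taken from the paper:

```python
import math

def puct_select(children, c_puct=1.0):
    """AlphaGo Zero-style (PUCT) selection sketch: pick the child maximizing
    Q + U, where U combines the network's prior with visit counts so that
    rarely visited, high-prior children are explored first.
    `children` is a list of dicts with keys 'q', 'prior', 'visits'."""
    total = sum(ch['visits'] for ch in children)
    def score(ch):
        u = c_puct * ch['prior'] * math.sqrt(total + 1) / (1 + ch['visits'])
        return ch['q'] + u
    return max(range(len(children)), key=lambda i: score(children[i]))
```

With few visits the prior term dominates; as visit counts grow, the estimated value Q takes over.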
Posted Content
HG-DAgger: Interactive Imitation Learning with Human Experts
TL;DR: HG-DAgger is proposed, a variant of DAgger that is more suitable for interactive imitation learning from human experts in real-world systems and learns a safety threshold for a model-uncertainty-based risk metric that can be used to predict the performance of the fully trained novice in different regions of the state space.
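The gating idea behind HG-DAgger can be sketched as follows: the human expert takes control whenever they judge the novice unsafe, and only the expert-labeled states are aggregated into the training set. All function names here are illustrative, not the paper's API:

```python
def hg_dagger_rollout(novice_act, expert_act, expert_engaged, env_step,
                      state, horizon):
    """One HG-DAgger data-collection episode (sketch): the human expert takes
    over whenever expert_engaged(state) is True, and only those expert-labeled
    (state, action) pairs are added to the dataset."""
    dataset = []
    for _ in range(horizon):
        if expert_engaged(state):
            action = expert_act(state)
            dataset.append((state, action))   # aggregate expert labels only
        else:
            action = novice_act(state)        # novice acts; no label collected
        state = env_step(state, action)
    return dataset
```

The fraction of states in which the expert engages can then inform the risk threshold the paper learns.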
Proceedings Article
Data-Driven Probabilistic Modeling and Verification of Human Driver Behavior
Dorsa Sadigh, Katherine Driggs-Campbell, Alberto Puggelli, Wenchao Li, Victor Shia, Ruzena Bajcsy, Alberto Sangiovanni-Vincentelli, S. Shankar Sastry, Sanjit A. Seshia, et al.
TL;DR: A novel stochastic model of the driver behavior based on Markov chains in which the transition probabilities are only known to lie in convex uncertainty sets is proposed, and properties of the model expressed in probabilistic computation tree logic (PCTL) are formally verified.
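A minimal sketch of verification over such uncertainty sets, assuming the convex sets are intervals `[p_lo, p_hi]` on each transition probability: an upper bound on the probability of reaching a set of bad states can be computed by adversarially pushing probability mass toward high-value successors at each step (a standard greedy inner step for interval Markov chains; all names are illustrative):

```python
import numpy as np

def worst_case_reach(p_lo, p_hi, bad, n_steps):
    """Upper bound on reaching `bad` states within n_steps in an interval
    Markov chain (sketch). Each row's transition probabilities are only known
    to lie in [p_lo, p_hi]; assumes p_lo rows sum to <= 1 and p_hi rows to >= 1.
    At every step the adversary fills remaining probability mass greedily
    toward successors with the highest reach probability."""
    n = p_lo.shape[0]
    v = bad.astype(float)                 # v[s] = worst-case reach probability
    for _ in range(n_steps):
        new_v = v.copy()
        for s in range(n):
            if bad[s]:
                continue                  # bad states are absorbing
            row = p_lo[s].copy()
            budget = 1.0 - row.sum()      # mass still to distribute
            for j in np.argsort(-v):      # best successors first
                add = min(p_hi[s, j] - row[j], budget)
                row[j] += add
                budget -= add
            new_v[s] = row @ v
        v = new_v
    return v
```

A PCTL property such as "the probability of reaching a bad state within k steps is below p" then holds for every Markov chain in the uncertainty set iff the bound returned here is below p.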
Proceedings ArticleDOI
Adaptive Stress Testing with Reward Augmentation for Autonomous Vehicle Validation
TL;DR: In this article, a modification of the Adaptive Stress Testing (AST) method is proposed that discovers a larger and more expressive subset of the failure space than the original AST formulation, and is able to identify useful failure scenarios of an autonomous vehicle policy.
Proceedings ArticleDOI
EnsembleDAgger: A Bayesian Approach to Safe Imitation Learning
TL;DR: This work presents a probabilistic extension to DAgger, which attempts to quantify the confidence of the novice policy as a proxy for safety, and approximates a Gaussian process using an ensemble of neural networks.
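The confidence proxy can be sketched like this: an ensemble of networks stands in for a Gaussian process posterior, and the novice acts only when the ensemble variance and its disagreement with the expert are both small. The thresholds and function names below are illustrative assumptions:

```python
import numpy as np

def ensemble_predict(models, state):
    """Mean and variance of the ensemble's action predictions; the variance
    across members serves as a GP-like uncertainty estimate."""
    preds = np.array([m(state) for m in models])
    return preds.mean(axis=0), preds.var(axis=0)

def safe_action(models, expert_act, state, var_thresh=0.05, disc_thresh=0.2):
    """EnsembleDAgger-style decision rule (sketch): defer to the expert when
    the novice is uncertain or disagrees with the expert's action."""
    mu, var = ensemble_predict(models, state)
    expert_a = expert_act(state)
    if np.all(var < var_thresh) and np.linalg.norm(mu - expert_a) < disc_thresh:
        return mu          # confident and close to the expert: novice acts
    return expert_a        # otherwise the expert retains control
```

The variance threshold plays the role of the safety boundary that the ensemble's disagreement is calibrated against.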