Dheeru Dua
Researcher at University of California, Irvine
Publications - 26
Citations - 1697
Dheeru Dua is an academic researcher at the University of California, Irvine, working primarily on reading comprehension and computer science. The author has an h-index of 9, with 24 co-authored publications receiving 1226 citations. Previous affiliations include the Allen Institute for Artificial Intelligence and IBM.
Papers
Proceedings Article
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
TL;DR: A new reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs, together with a new model that combines reading comprehension methods with simple numerical reasoning to achieve 51% F1.
Posted Content
Generating Natural Adversarial Examples
TL;DR: This paper proposes a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks.
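The core idea of searching a generative model's latent space for a nearby point whose decoded sample flips a classifier's decision can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `generator` and `classifier` below are stand-ins for a trained GAN and a target model, and the iterative widening search is a simplified version of the paper's latent-space search.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # Stand-in for a trained GAN generator: maps a latent vector
    # to a point on the (toy) data manifold.
    return np.tanh(z)

def classifier(x):
    # Toy target model with a linear decision boundary.
    return int(x[0] + x[1] > 0)

def natural_adversary(z0, n_samples=2000, radius=0.1, max_radius=5.0):
    """Search the latent neighborhood of z0 for the closest z' whose
    generated sample receives a different label than G(z0)."""
    y0 = classifier(generator(z0))
    while radius <= max_radius:
        # Sample perturbations with lengths up to the current radius.
        deltas = rng.normal(size=(n_samples, z0.size))
        deltas *= radius / np.linalg.norm(deltas, axis=1, keepdims=True)
        candidates = z0 + deltas * rng.uniform(0, 1, size=(n_samples, 1))
        flipped = [z for z in candidates
                   if classifier(generator(z)) != y0]
        if flipped:
            # Return the label-flipping candidate nearest to z0.
            return min(flipped, key=lambda z: np.linalg.norm(z - z0))
        radius *= 2  # nothing found: widen the search region
    return None

z0 = np.array([1.0, 1.0])
z_adv = natural_adversary(z0)
print(classifier(generator(z0)), classifier(generator(z_adv)))  # labels differ
```

Because every candidate is decoded through the generator, the adversarial example stays on the learned data manifold, which is what makes the result "natural" rather than a pixel-level perturbation.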
Posted Content
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
TL;DR: This article introduces a new English reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs; applying state-of-the-art methods from both the reading comprehension and semantic parsing literature, the authors show that the best systems achieve only 32.7% F1 on the generalized accuracy metric, while expert human performance is 96.0%.
Proceedings Article
Evaluating Models’ Local Decision Boundaries via Contrast Sets
Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou, +25 more
TL;DR: A more rigorous annotation paradigm for NLP that helps close systematic gaps in test data: dataset authors manually perturb test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets.
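The paper evaluates models with a consistency metric: a model is credited for a contrast set only if it answers every example in that set correctly. A minimal sketch of that scoring, with illustrative names and toy data (not the paper's code):

```python
from collections import defaultdict

def contrast_consistency(set_ids, predictions, gold):
    """Fraction of contrast sets on which the model gets *every*
    member example right (the paper's consistency metric;
    function and variable names here are illustrative)."""
    groups = defaultdict(list)
    for sid, pred, label in zip(set_ids, predictions, gold):
        groups[sid].append(pred == label)
    # A set counts only if all of its examples are correct.
    return sum(all(results) for results in groups.values()) / len(groups)

# Two contrast sets: the model gets set "a" fully right,
# but misses one example in set "b".
ids   = ["a", "a", "b", "b", "b"]
preds = ["yes", "no", "yes", "yes", "no"]
gold  = ["yes", "no", "yes", "no",  "no"]
print(contrast_consistency(ids, preds, gold))  # 0.5
```

Per-example accuracy here would be 4/5, but consistency is only 1/2, which is the point of the metric: it exposes models that succeed on an instance while failing on its minimally perturbed neighbors.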
Proceedings Article
Generating Natural Adversarial Examples
TL;DR: This article proposes a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in the semantic space of a dense and continuous data representation, utilizing recent advances in generative adversarial networks.