Mandar Joshi
Researcher at University of Washington
Publications: 30
Citations: 17,885
Mandar Joshi is an academic researcher from the University of Washington. The author has contributed to research in topics: Coreference & Sentence. The author has an h-index of 12 and has co-authored 28 publications receiving 8,886 citations. Previous affiliations of Mandar Joshi include Facebook & Visvesvaraya National Institute of Technology.
Papers
Posted Content
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov
TL;DR: It is found that BERT was significantly undertrained and, with improved pretraining, can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE, and SQuAD.
Proceedings ArticleDOI
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
TL;DR: It is shown that, in comparison to other recently introduced large-scale datasets, TriviaQA has relatively complex, compositional questions, exhibits considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and requires more cross-sentence reasoning to find answers.
Journal ArticleDOI
SpanBERT: Improving Pre-training by Representing and Predicting Spans
TL;DR: The approach extends BERT by masking contiguous random spans, rather than random tokens, and training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it.
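The span-masking scheme described above can be sketched in a few lines: spans of contiguous tokens are masked together, with lengths drawn from a truncated geometric distribution, until a target fraction of the sequence is masked. This is a minimal illustration of the idea, not the authors' implementation; the function names are hypothetical, and the geometric parameter (p = 0.2, spans capped at 10 tokens) and ~15% masking budget follow the paper's description.

```python
import random

def sample_span_length(rng, p=0.2, max_len=10):
    """Draw a span length from a geometric distribution truncated at max_len."""
    length = 1
    while rng.random() > p and length < max_len:
        length += 1
    return length

def mask_contiguous_spans(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Mask whole contiguous spans (rather than independent random tokens)
    until roughly mask_rate of the sequence is masked.

    Returns the masked token list and the (start, end) indices of each span;
    the boundary tokens at start-1 and end are what SpanBERT's span boundary
    objective would use to predict the masked content."""
    rng = random.Random(seed)
    masked = list(tokens)
    budget = max(1, int(len(tokens) * mask_rate))  # total tokens to mask
    spans, n_masked = [], 0
    while n_masked < budget:
        # Clip the sampled length so we never exceed the masking budget.
        length = min(sample_span_length(rng), budget - n_masked)
        start = rng.randrange(0, len(tokens) - length + 1)
        # Re-sample if this span overlaps an already-masked one.
        if any(masked[i] == mask_token for i in range(start, start + length)):
            continue
        for i in range(start, start + length):
            masked[i] = mask_token
        spans.append((start, start + length))
        n_masked += length
    return masked, spans
```

With a 40-token sequence and the default rate, exactly 6 tokens end up masked, grouped into a handful of contiguous spans; masking spans forces the model to predict multi-token units (e.g. entities or phrases) rather than isolated wordpieces.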
Proceedings ArticleDOI
BERT for Coreference Resolution: Baselines and Analysis.
TL;DR: This paper applies BERT to coreference resolution, achieving a new state of the art on the GAP (+11.5 F1) and OntoNotes (+3.9 F1) benchmarks.