Percy Liang
Researcher at Stanford University
Publications - 369
Citations - 42254
Percy Liang is an academic researcher at Stanford University. He has contributed to research in topics including computer science and parsing. He has an h-index of 75 and has co-authored 306 publications receiving 29,242 citations. His previous affiliations include the University of California, Berkeley, and Google.
Papers
Posted Content
SQuAD: 100,000+ Questions for Machine Comprehension of Text
TL;DR: The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.
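The defining property of SQuAD is that every answer is an extractive span of the passage, stored as text plus a character offset. A minimal sketch of that record layout (the field names follow the published SQuAD v1.1 JSON schema; the passage and question here are illustrative):

```python
# One SQuAD-style record: answers are spans of the context, given as
# text plus a character offset ("answer_start").
record = {
    "context": "Super Bowl 50 was an American football game played in Santa Clara.",
    "qas": [
        {
            "question": "Where was Super Bowl 50 played?",
            "id": "example-1",
            "answers": [{"text": "Santa Clara", "answer_start": 54}],
        }
    ],
}

qa = record["qas"][0]
ans = qa["answers"][0]
start = ans["answer_start"]
# The answer must be recoverable by slicing the passage at the offset.
span = record["context"][start:start + len(ans["text"])]
assert span == ans["text"]
```

Because answers are always passage spans, systems can be trained and evaluated as span selectors rather than free-form text generators.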
Proceedings Article
Semantic Parsing on Freebase from Question-Answer Pairs
TL;DR: This paper trains a semantic parser from question-answer pairs alone that scales up to Freebase and, despite not having annotated logical forms, outperforms the state-of-the-art parser on the dataset of Cai and Yates (2013).
Posted Content
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang, and 1 more
TL;DR: This paper uses influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction.
Proceedings ArticleDOI
Know What You Don't Know: Unanswerable Questions for SQuAD
TL;DR: SQuADRUn is a new dataset that combines the existing Stanford Question Answering Dataset with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.
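The practical change in the 2.0-style data is that questions carry an unanswerability flag, so systems must learn to abstain as well as to extract spans. A minimal sketch (field names follow the published SQuAD 2.0 JSON schema; the questions are illustrative):

```python
# SQuAD 2.0-style entries: each question has an "is_impossible" flag,
# and unanswerable questions have an empty answer list.
qas = [
    {"question": "Where was the game played?",
     "is_impossible": False,
     "answers": [{"text": "Santa Clara", "answer_start": 54}]},
    {"question": "Who won the game in 1850?",   # adversarial, unanswerable
     "is_impossible": True,
     "answers": []},
]

# A system should return a span only when one exists, and abstain otherwise
# (the empty string is the conventional "no answer" prediction).
predictions = [
    "" if qa["is_impossible"] else qa["answers"][0]["text"]
    for qa in qas
]
```

Under this setup, models that always extract a plausible-looking span are penalized on the adversarial questions, which is exactly the failure mode the dataset was built to expose.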