Sihao Chen
Researcher at University of Pennsylvania
Publications - 15
Citations - 571
Sihao Chen is an academic researcher from the University of Pennsylvania. The author has contributed to research in the topics of computer science and automatic summarization, has an h-index of 5, and has co-authored 11 publications receiving 317 citations.
Papers
Proceedings ArticleDOI
Evaluating Models’ Local Decision Boundaries via Contrast Sets
Matt Gardner,Yoav Artzi,Victoria Basmov,Jonathan Berant,Ben Bogin,Sihao Chen,Pradeep Dasigi,Dheeru Dua,Yanai Elazar,Ananth Gottumukkala,Nitish Gupta,Hannaneh Hajishirzi,Gabriel Ilharco,Daniel Khashabi,Kevin Lin,Jiangming Liu,Nelson F. Liu,Phoebe Mulcaire,Qiang Ning,Sameer Singh,Noah A. Smith,Sanjay Subramanian,Reut Tsarfaty,Eric Wallace,Ally Zhang,Ben Zhou +25 more
TL;DR: Proposes a more rigorous annotation paradigm for NLP that helps close systematic gaps in test data, recommending that dataset authors manually perturb test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets.
Proceedings ArticleDOI
Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims.
TL;DR: Provides a thorough analysis of the dataset to highlight key underlying language-understanding challenges, and shows that human baselines across multiple subtasks far outperform machine baselines built upon state-of-the-art NLP techniques.
Posted Content
Evaluating NLP Models via Contrast Sets
Matt Gardner,Yoav Artzi,Victoria Basmova,Jonathan Berant,Ben Bogin,Sihao Chen,Pradeep Dasigi,Dheeru Dua,Yanai Elazar,Ananth Gottumukkala,Nitish Gupta,Hanna Hajishirzi,Gabriel Ilharco,Daniel Khashabi,Kevin Lin,Jiangming Liu,Nelson F. Liu,Phoebe Mulcaire,Qiang Ning,Sameer Singh,Noah A. Smith,Sanjay Subramanian,Reut Tsarfaty,Eric Wallace,Ally Zhang,Ben Zhou +25 more
TL;DR: Proposes a new annotation paradigm for NLP that helps close systematic gaps in test data, recommending that, after a dataset is constructed, the dataset authors manually perturb the test instances in small but meaningful ways that change the gold label, creating contrast sets.
Posted Content
Evaluating Models' Local Decision Boundaries via Contrast Sets.
Matt Gardner,Yoav Artzi,Victoria Basmova,Jonathan Berant,Ben Bogin,Sihao Chen,Pradeep Dasigi,Dheeru Dua,Yanai Elazar,Ananth Gottumukkala,Nitish Gupta,Hanna Hajishirzi,Gabriel Ilharco,Daniel Khashabi,Kevin Lin,Jiangming Liu,Nelson F. Liu,Phoebe Mulcaire,Qiang Ning,Sameer Singh,Noah A. Smith,Sanjay Subramanian,Reut Tsarfaty,Eric Wallace,Ally Zhang,Ben Zhou +25 more
TL;DR: Contrast sets, as introduced in this paper, are a new annotation paradigm for NLP that helps close systematic gaps in test data: the dataset authors manually perturb the test instances in small but meaningful ways that change the gold label.
Proceedings ArticleDOI
Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection
TL;DR: This work learns a discriminative correction model to fix extrinsic hallucinations by generating alternative candidate summaries in which named entities and quantities in the generated summary are replaced with ones of compatible semantic types from the source document.