Ziming Mao
Publications - 5
Citations - 9
Ziming Mao is an academic researcher who has contributed to research on automatic summarization and computer science. The author has an h-index of 1 and has co-authored 5 publications receiving 3 citations.
Papers
Posted Content
DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization.
Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed Hassan Awadallah, Dragomir R. Radev
TL;DR: This paper proposed DYLE, a dynamic latent extraction model for abstractive long-input summarization, which jointly trains an extractor with an abstractor and treats the extracted text snippets as the latent variable.
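The sketch below (not the authors' implementation; all function names and numbers are hypothetical) illustrates the latent-extraction idea in plain Python: an extractor scores candidate snippets, and the generation probability is marginalized over those scores rather than committing to one hard extraction.

import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def marginal_generation_prob(snippet_scores, p_output_given_snippet):
    # P(output) = sum_k P(snippet_k) * P(output | snippet_k).
    # Treating the extracted snippet as a latent variable means the
    # abstractor never commits to a single hard extraction during training.
    weights = softmax(snippet_scores)
    return sum(w * p for w, p in zip(weights, p_output_given_snippet))

# Hypothetical extractor scores for three snippets, and the abstractor's
# probability of the reference summary conditioned on each snippet.
scores = [2.0, 0.5, -1.0]
p_ref = [0.30, 0.10, 0.02]
print(marginal_generation_prob(scores, p_ref))  # weighted mixture over snippets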
Posted Content
FeTaQA: Free-form Table Question Answering.
Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kryscinski, Nick Schoelkopf, Riley Kong, Xiangru Tang, Murori Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, Dragomir R. Radev
TL;DR: FeTaQA is a table question answering dataset of 10k Wikipedia-based pairs in which answers are human-generated, free-form explanations involving entities and their high-level relations.
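As a hypothetical illustration of the task format (the field names and values here are assumptions, not the dataset's actual schema), a single example pairs a table and a question with a free-form sentence answer:

# A hypothetical FeTaQA-style example; field names are illustrative only.
example = {
    "table": {
        "header": ["Year", "Award", "Result"],
        "rows": [["2010", "Best Actor", "Won"],
                 ["2012", "Best Actor", "Nominated"]],
    },
    "question": "How did the actor fare at the awards in 2010 and 2012?",
    # Unlike short-form table QA, the answer is a free-form sentence that
    # explains entities and their relations rather than a single cell value.
    "answer": "The actor won Best Actor in 2010 and was nominated again in 2012.",
}
print(example["answer"])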
Posted Content
Summ^N: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents.
Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed Hassan Awadallah, Dragomir R. Radev, Rui Zhang
TL;DR: Summ^N generates a coarse summary in multiple stages and then produces the final fine-grained summary based on it; the framework handles both documents and dialogues and can be used on top of any backbone abstractive summarization model.
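A minimal sketch of the multi-stage idea, assuming a generic summarize backbone (simple truncation stands in for a real abstractive model here): chunk the long input, summarize each chunk, concatenate the results, and repeat until a single final fine-grained pass remains.

def summarize(text: str, max_len: int = 80) -> str:
    # Placeholder backbone: truncation stands in for an abstractive model.
    return text[:max_len]

def multi_stage_summarize(text: str, chunk_size: int = 200) -> str:
    # Coarse stages: chunk and summarize until the text fits in one chunk.
    while len(text) > chunk_size:
        chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
        text = " ".join(summarize(c) for c in chunks)
    # Fine stage: one final pass over the condensed input.
    return summarize(text)

print(multi_stage_summarize("lorem ipsum " * 100))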
Posted Content
Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries.
Xiangru Tang, Alexander R. Fabbri, Ziming Mao, Griffin Adams, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir R. Radev
TL;DR: This paper used the rating-based Likert scale and ranking-based Best-Worst Scaling (BWS) protocols to evaluate the factual consistency of summaries and found that BWS provides a more reliable measure of summary quality across datasets.
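For reference, Best-Worst Scaling is typically aggregated by scoring each item as the number of times it was chosen best minus the number of times it was chosen worst, normalized by how often it appeared; the sketch below uses hypothetical annotation data, not results from the paper.

from collections import Counter

def bws_scores(tuples):
    # tuples: list of (items_shown, best_item, worst_item) annotations.
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in tuples:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    # Score = (#best - #worst) / #appearances, in [-1, 1].
    return {i: (best[i] - worst[i]) / shown[i] for i in shown}

annotations = [
    (("A", "B", "C"), "A", "C"),
    (("A", "B", "D"), "A", "D"),
    (("B", "C", "D"), "B", "C"),
]
print(bws_scores(annotations))  # A: 1.0, B: ~0.33, C: -1.0, D: -0.5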