Zhiyuan Liu
Researcher at Tsinghua University
Publications - 284
Citations - 22033
Zhiyuan Liu is an academic researcher at Tsinghua University. The author has contributed to research on topics including computer science and relationship extraction, has an h-index of 50, and has co-authored 163 publications receiving 14,890 citations. Previous affiliations of Zhiyuan Liu include Google and Jiangsu Normal University.
Papers
Proceedings ArticleDOI
MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction
Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jian Zhou +11 more
TL;DR: The MAVEN-ERE dataset contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations.
Proceedings ArticleDOI
A Close Look into the Calibration of Pre-trained Language Models
TL;DR: This article found that pre-trained language models do not learn to become calibrated during training, evidenced by a continual increase in confidence regardless of whether the predictions are correct.
Journal ArticleDOI
3D printing of crack-free dense polymer-derived ceramic monoliths and lattice skeletons with improved thickness and mechanical performance
S. S. Xiong, Ji'an Liu, Ji-wei Cao, Ziyong Li, Muhammad Idrees, Xiao Cheng Lin, Zhongyu Long, Zhiyuan Liu, Pei Wang, Chang Yong Liu, Zhangwei Chen +10 more
TL;DR: In this paper, porosities and cracks generated during the pyrolysis process were prevented by using phenolic resin (PR) as an additive together with a controlled pyrolysis strategy, yielding crack-free SiOC ceramics additively manufactured via vat photopolymerization.
Journal Article
THUNLP at TAC KBP 2011 in Entity Linking
TL;DR: In this article, pairwise and listwise learning-to-rank methods were used to create a ranking list of candidates based on several kinds of features, including context similarity, term frequency, key entity extraction, and WikiPage information.
Proceedings ArticleDOI
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models
TL;DR: COPEN is a COnceptual knowledge Probing bENchmark for pre-trained language models, inspired by knowledge representation schemata; conceptual knowledge is fundamental to human cognition and to knowledge bases.