Eric P. Xing
Researcher at Carnegie Mellon University
Publications: 725
Citations: 48,035
Eric P. Xing is an academic researcher at Carnegie Mellon University. He has contributed to research in topics including Inference and Topic models. He has an h-index of 99 and has co-authored 711 publications receiving 41,467 citations. Previous affiliations of Eric P. Xing include Microsoft and Intel.
Papers
Posted Content
Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation
Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wangrong Zhu, Devendra Singh Sachan, Eric P. Xing +14 more
TL;DR: Texar is an open-source toolkit that aims to support a broad set of text generation tasks transforming any inputs into natural language, such as machine translation, summarization, dialog, and content manipulation.
Proceedings Article
Orthogonality-Promoting Distance Metric Learning: Convex Relaxation and Theoretical Analysis
TL;DR: In this paper, a convex relaxation of the original non-convex optimization problem is proposed to attain the global optimum, and a formal analysis of OPR's capability of promoting balancedness is provided.
Posted Content
Stackelberg GAN: Towards Provable Minimax Equilibrium via Multi-Generator Architectures
TL;DR: The proposed Stackelberg GAN performs well experimentally on both synthetic and real-world datasets, improving Fréchet Inception Distance by 14.61% over previous multi-generator GANs on the benchmark datasets.
Proceedings ArticleDOI
A Constituent-Centric Neural Architecture for Reading Comprehension
Pengtao Xie, Eric P. Xing +1 more
TL;DR: A constituent-centric neural architecture in which both candidate-answer generation and representation learning are based on constituents and guided by the parse tree, which yields better representations of the candidate answers.
Posted Content
Fault Tolerance in Iterative-Convergent Machine Learning
TL;DR: In this paper, the authors develop a general framework to quantify the effects of calculation errors on iterative-convergent algorithms and use this framework to design new strategies for checkpoint-based fault tolerance.
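To give a flavor of what checkpoint-based fault tolerance means for an iterative-convergent algorithm, here is a minimal illustrative sketch (not the paper's actual framework): gradient descent on a toy objective that periodically saves its state and, when a simulated fault occurs, rolls back to the last checkpoint and resumes. All names and parameters here are hypothetical.

```python
def grad(x):
    # Gradient of the toy objective f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

def train(steps, lr=0.1, checkpoint_every=10, fault_at=25):
    """Run gradient descent with periodic checkpoints.

    A fault is simulated at iteration `fault_at`: the state is rolled
    back to the most recent checkpoint and training resumes from there.
    Because the update is deterministic, re-executing the lost
    iterations recovers the same trajectory.
    """
    x = 0.0
    checkpoint = (0, x)
    step = 0
    while step < steps:
        if step % checkpoint_every == 0:
            checkpoint = (step, x)      # save state
        if step == fault_at:
            step, x = checkpoint        # fault: restore last checkpoint
            fault_at = None             # fail only once
            continue
        x -= lr * grad(x)               # iterative-convergent update
        step += 1
    return x

# The run with a fault converges to the same point as a fault-free run.
with_fault = train(100)
no_fault = train(100, fault_at=-1)
```

The key property this sketch illustrates is that iterative-convergent algorithms are self-correcting: losing a few iterations and replaying them from a checkpoint does not change the point of convergence, which is what makes coarse-grained checkpointing a viable fault-tolerance strategy.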