Eric P. Xing

Researcher at Carnegie Mellon University

Publications - 725
Citations - 48035

Eric P. Xing is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Inference & Topic model. The author has an h-index of 99 and has co-authored 711 publications receiving 41467 citations. Previous affiliations of Eric P. Xing include Microsoft & Intel.

Papers
Proceedings Article

Smoothing proximal gradient method for general structured sparse learning

TL;DR: This paper proposes a general optimization approach, the smoothing proximal gradient method, which can solve structured sparse regression problems with a smooth convex loss and a wide spectrum of structured-sparsity-inducing penalties.
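
For illustration, a minimal sketch of the core idea under simplifying assumptions: an overlapping-group-lasso penalty, a squared loss, plain (non-accelerated) gradient steps, and a crude Lipschitz bound. The function names are mine, and the paper's separate proximal step for a plain L1 term is omitted; this is not the authors' reference implementation. The key trick is replacing the non-smooth penalty with its Nesterov-smoothed surrogate, whose gradient is cheap to evaluate.

    import numpy as np

    def smoothed_penalty_grad(beta, groups, weights, mu):
        # Gradient of the Nesterov-smoothed overlapping-group-lasso penalty
        # Omega(beta) = sum_g w_g * ||beta[g]||_2, computed via its dual:
        # alpha_g* = projection of w_g * beta[g] / mu onto the unit L2 ball.
        grad = np.zeros_like(beta)
        for g, w in zip(groups, weights):
            alpha = w * beta[g] / mu
            norm = np.linalg.norm(alpha)
            if norm > 1.0:
                alpha /= norm                  # project onto the unit L2 ball
            grad[g] += w * alpha
        return grad

    def spg(X, y, groups, weights, lam=0.1, mu=1e-3, n_iter=500):
        # Gradient descent on 0.5*||y - X beta||^2 + lam * Omega_mu(beta).
        beta = np.zeros(X.shape[1])
        # Conservative Lipschitz bound: sigma_max(X)^2 plus the smoothed
        # penalty's bound lam * ||C||^2 / mu (upper-bounded by sum of w_g^2).
        L = np.linalg.norm(X, 2) ** 2 + lam * sum(w * w for w in weights) / mu
        for _ in range(n_iter):
            grad = X.T @ (X @ beta - y) + lam * smoothed_penalty_grad(beta, groups, weights, mu)
            beta -= grad / L
        return beta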
Posted Content

Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters

TL;DR: Poseidon exploits the layered model structure of DL programs to overlap communication with computation, reducing bursty network traffic, and uses a hybrid communication scheme that minimizes the number of bytes required to synchronize each layer, chosen according to layer properties and the total number of machines.
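
Both mechanisms named in the summary can be sketched in a few lines. This is a toy illustration of mine, not Poseidon's code; compute_grad and sync_grad are placeholder callables, and the byte counts are simplified.

    from concurrent.futures import ThreadPoolExecutor

    def backward_pass(layers, compute_grad, sync_grad):
        # Overlapping communication and computation: as soon as layer l's
        # gradient is ready, launch its synchronization asynchronously so
        # that communication for layer l overlaps with gradient computation
        # for layer l-1.
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = []
            for layer in reversed(layers):        # top layers finish first
                grad = compute_grad(layer)        # local computation
                futures.append(pool.submit(sync_grad, layer, grad))
            return [f.result() for f in futures]  # barrier before next iteration

    def send_sufficient_factors(rows, cols, batch, machines):
        # Hybrid communication (simplified): a fully connected layer's
        # gradient is a sum of outer products of two activation vectors, so
        # peers can broadcast the (rows + cols)-sized factors instead of the
        # rows*cols matrix; which is cheaper depends on layer shape and
        # cluster size.
        full_matrix = rows * cols                          # parameter-server style
        factors = (rows + cols) * batch * (machines - 1)   # peer-to-peer broadcast
        return factors < full_matrix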
Proceedings Article

Parallel Markov Chain Monte Carlo for Nonparametric Mixture Models

TL;DR: This paper describes auxiliary-variable representations of the Dirichlet process and the hierarchical Dirichlet process that allow MCMC to be performed in a distributed manner while preserving the correct equilibrium distribution, enabling scalable inference without the deterioration in estimate quality that accompanies existing methods.
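
The construction rests on a superposition property of the Dirichlet process; in my paraphrase, for P processors:

    D \sim \mathrm{DP}(\alpha, H)
    \quad\Longleftrightarrow\quad
    D \overset{d}{=} \sum_{j=1}^{P} \pi_j D_j,
    \qquad \pi \sim \mathrm{Dirichlet}(\alpha/P, \dots, \alpha/P),
    \quad D_j \overset{\text{iid}}{\sim} \mathrm{DP}(\alpha/P, H).

Conditioned on auxiliary variables assigning each cluster to one of the P components, the D_j are independent, so each processor can run exact local Gibbs updates on its own clusters, with occasional global moves reassigning clusters across processors.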
Posted Content

Integrating Document Clustering and Topic Modeling

TL;DR: The authors propose a multi-grain clustering topic model (MGCTM) that integrates document clustering and topic modeling into a unified framework and performs the two tasks jointly, achieving the best overall performance.
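
As an illustration of the multi-grain structure, here is a hedged sketch of such a generative process; the variable names and the exact mixing scheme are my paraphrase of the abstract, not the paper's notation.

    import numpy as np

    rng = np.random.default_rng(0)

    def generate_document(n_words, pi, omega,
                          theta_local, phi_local,      # per-cluster topic mixtures / topics
                          theta_global, phi_global):   # shared topic mixture / topics
        # Each document first draws a cluster (the clustering side), then
        # draws each word from either a cluster-specific "local" topic or a
        # corpus-wide "global" topic (the topic-modeling side), which couples
        # the two tasks.
        c = rng.choice(len(pi), p=pi)
        words = []
        for _ in range(n_words):
            if rng.random() < omega:                   # local granularity
                k = rng.choice(len(theta_local[c]), p=theta_local[c])
                words.append(rng.choice(len(phi_local[c][k]), p=phi_local[c][k]))
            else:                                      # global granularity
                k = rng.choice(len(theta_global), p=theta_global)
                words.append(rng.choice(len(phi_global[k]), p=phi_global[k]))
        return c, words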
Proceedings Article

Self-Training for Jointly Learning to Ask and Answer Questions

TL;DR: This work proposes a self-training method for jointly learning to ask and answer questions, leveraging unlabeled text along with labeled question-answer pairs, and demonstrates significant improvements over a number of established baselines.
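
Schematically, the self-training loop could look like the following; train_qa, train_qg, and the confidence filter are placeholders of mine, not the authors' code.

    def self_train(labeled, unlabeled, train_qa, train_qg, rounds=3, threshold=0.9):
        # labeled: (passage, question, answer) triples; unlabeled: raw passages.
        qa = train_qa(labeled)                   # question-answering model
        qg = train_qg(labeled)                   # question-generation model
        data = list(labeled)
        for _ in range(rounds):
            for passage in unlabeled:
                question = qg(passage)           # ask a question about new text
                answer, confidence = qa(passage, question)
                if confidence >= threshold:      # keep only confident pseudo-labels
                    data.append((passage, question, answer))
            qa = train_qa(data)                  # retrain both models on augmented data
            qg = train_qg(data)
        return qa, qg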