SciSpace (formerly Typeset)

Eric P. Xing

Researcher at Carnegie Mellon University

Publications - 725
Citations - 48,035

Eric P. Xing is an academic researcher from Carnegie Mellon University. The author has contributed to research on topics including inference and topic models. The author has an h-index of 99 and has co-authored 711 publications receiving 41,467 citations. Previous affiliations of Eric P. Xing include Microsoft and Intel.

Papers
Posted Content

Hybrid Retrieval-Generation Reinforced Agent for Medical Image Report Generation

TL;DR: In this article, a hybrid retrieval-generation reinforced agent (HRGR-Agent) is proposed to achieve structured, robust, and diverse report generation by employing a hierarchical decision-making procedure.
Posted Content

SeDMiD for Confusion Detection: Uncovering Mind State from Time Series Brain Wave Data.

TL;DR: This paper proposes an extension of the State Space Model that combines different sources of information, together with its learning and inference algorithms, and applies the model to decode the mind state of students during lectures from their brain waves, achieving significantly better results than traditional methods.
Journal ArticleDOI

Towards robust partially supervised multi-structure medical image segmentation on small-scale data

TL;DR: In this paper, the authors proposed Vicinal Labels Under Uncertainty (VLUU), a simple yet efficient framework utilizing the human structure similarity for partially supervised medical image segmentation, which transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Proceedings Article

Post-Inference Prior Swapping

TL;DR: Prior swapping leverages a pre-inferred false posterior to efficiently generate accurate posterior samples under arbitrary target priors; this can be used to apply less costly inference algorithms to certain models and to incorporate new or updated prior information "post-inference".
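The intuition behind prior swapping can be sketched with a minimal importance-reweighting example: samples inferred under a convenient "false" prior are reweighted by the ratio of the target prior to the false prior. This is a simplification of the paper's method, and the specific priors, the stand-in false-posterior samples, and the function names below are all hypothetical, chosen only for illustration.

```python
import random
import math

def false_prior_logpdf(theta):
    # Hypothetical broad "false" prior used during the original inference: N(0, 10^2)
    return -0.5 * (theta / 10.0) ** 2 - math.log(10.0 * math.sqrt(2 * math.pi))

def target_prior_logpdf(theta):
    # Hypothetical new/updated target prior to incorporate post-inference: N(2, 1^2)
    return -0.5 * (theta - 2.0) ** 2 - math.log(math.sqrt(2 * math.pi))

def prior_swap(false_posterior_samples):
    """Reweight false-posterior samples toward the posterior under the target prior.

    Weight for each sample: target_prior(theta) / false_prior(theta),
    computed in log space for numerical stability, then normalized.
    """
    log_w = [target_prior_logpdf(t) - false_prior_logpdf(t)
             for t in false_posterior_samples]
    m = max(log_w)                                # subtract max before exponentiating
    w = [math.exp(lw - m) for lw in log_w]
    total = sum(w)
    return [wi / total for wi in w]

random.seed(0)
# Stand-in for samples from a pre-inferred false posterior (here just N(1, 2^2))
samples = [random.gauss(1.0, 2.0) for _ in range(5000)]
weights = prior_swap(samples)

# Weighted posterior mean under the target prior: pulled toward the
# target prior's mean (2.0) relative to the false-posterior mean (1.0)
post_mean = sum(w * t for w, t in zip(weights, samples))
print(round(post_mean, 2))
```

The normalized weights form a self-normalized importance-sampling estimator, so expectations under the target-prior posterior can be approximated without rerunning inference.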
Posted Content

Word Shape Matters: Robust Machine Translation with Visual Embedding.

TL;DR: A new encoding heuristic for the input symbols of character-level NLP models is introduced: it encodes the shape of each character through images of the printed letters, which is expected to improve the robustness of NLP models.