Yizhe Zhang
Researcher at Duke University
Publications - 62
Citations - 1882
Yizhe Zhang is an academic researcher from Duke University. The author has contributed to research topics including computer science and Markov chain Monte Carlo. The author has an h-index of 17, and has co-authored 41 publications receiving 1407 citations. Previous affiliations of Yizhe Zhang include the Chinese Academy of Sciences and Microsoft.
Papers
Proceedings ArticleDOI
Joint Embedding of Words and Labels for Text Classification
Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin, et al.
TL;DR: This article proposes viewing text classification as a label-word joint embedding problem: each label is embedded in the same space as the word vectors, and an attention mechanism is learned on a training set of labeled samples so that relevant words are weighted more highly than irrelevant ones.
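The label-word attention idea above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's exact formulation: the cosine compatibility, max-over-labels reduction, and softmax here are assumptions chosen to show the mechanism, and all names are hypothetical.

```python
import numpy as np

def label_attentive_embedding(words: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Attention over words driven by label embeddings in the same space.

    words:  (seq_len, dim) word embeddings for one document.
    labels: (num_classes, dim) label embeddings.
    Returns a (dim,) attention-weighted document representation.
    """
    # Cosine compatibility between every word and every label.
    w = words / np.linalg.norm(words, axis=1, keepdims=True)
    l = labels / np.linalg.norm(labels, axis=1, keepdims=True)
    compat = w @ l.T                              # (seq_len, num_classes)

    # Each word's score is its strongest label affinity;
    # a softmax over words turns scores into attention weights.
    scores = compat.max(axis=1)                   # (seq_len,)
    attn = np.exp(scores) / np.exp(scores).sum()  # sums to 1

    # Weighted average of word vectors: relevant words dominate.
    return attn @ words                           # (dim,)

rng = np.random.default_rng(0)
doc = label_attentive_embedding(rng.normal(size=(6, 4)), rng.normal(size=(3, 4)))
print(doc.shape)  # (4,)
```

Words whose embeddings point toward some label embedding receive higher attention, so the pooled representation is biased toward class-relevant tokens.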
Proceedings Article
Adversarial feature matching for text generation
TL;DR: The authors propose a framework for generating realistic text via adversarial training, employing a long short-term memory (LSTM) network as the generator and a convolutional network as the discriminator.
Proceedings ArticleDOI
Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms
Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, Lawrence Carin, et al.
TL;DR: This paper conducts a point-by-point comparative study of Simple Word-Embedding-based Models (SWEMs), which consist of parameter-free pooling operations, against word-embedding-based RNN/CNN models.
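The parameter-free pooling at the core of SWEMs is simple enough to sketch directly. A minimal NumPy sketch, assuming the common average-pool, max-pool, and concatenated variants (function and variable names here are illustrative):

```python
import numpy as np

def swem_concat(embeddings: np.ndarray) -> np.ndarray:
    """Parameter-free pooling over a sentence's word embeddings.

    embeddings: (seq_len, dim) array of word vectors.
    Returns the concatenation of average-pooled and max-pooled
    features, a (2 * dim,) vector with no learned parameters.
    """
    avg_pool = embeddings.mean(axis=0)  # average over words
    max_pool = embeddings.max(axis=0)   # element-wise max over words
    return np.concatenate([avg_pool, max_pool])

# Toy sentence: 5 words with 4-dimensional embeddings.
rng = np.random.default_rng(0)
sentence = rng.normal(size=(5, 4))
features = swem_concat(sentence)
print(features.shape)  # (8,)
```

The resulting fixed-size vector can be fed to a plain classifier, which is the point of the comparison: no recurrent or convolutional machinery is involved in composing the word embeddings.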
Posted Content
Zero-Shot Learning via Class-Conditioned Deep Generative Models
Wenlin Wang, Yunchen Pu, Vinay Kumar Verma, Kai Fan, Yizhe Zhang, Changyou Chen, Piyush Rai, Lawrence Carin, et al.
TL;DR: This paper presents a deep generative model for learning to predict classes not seen at training time. Unlike most existing methods for this problem, which represent each class as a point (via a semantic embedding), it represents each seen/unseen class by a class-specific latent-space distribution conditioned on class attributes.
Proceedings Article
Triangle Generative Adversarial Networks
TL;DR: This article proposes a triangle generative adversarial network for cross-domain joint distribution matching, where the training data consist of samples from each domain and supervision of domain correspondence is provided by only a few paired samples.