
Yuexin Wu

Researcher at Carnegie Mellon University

Publications: 40
Citations: 2017

Yuexin Wu is an academic researcher at Carnegie Mellon University. The author has contributed to research topics including computer science and search algorithms, has an h-index of 15, and has co-authored 27 publications receiving 1,421 citations. Previous affiliations of Yuexin Wu include Microsoft and Tsinghua University.

Papers
Proceedings ArticleDOI

Deep Learning for Extreme Multi-label Text Classification

TL;DR: This paper presents the first attempt at applying deep learning to extreme multi-label text classification (XMTC), with a family of new convolutional neural network models tailored specifically for multi-label classification.
Proceedings Article

Analogical Inference for Multi-relational Embeddings

TL;DR: This paper proposes a novel framework for optimizing the latent representations with respect to the analogical properties of the embedded entities and relations, by formulating the learning objective in a differentiable fashion.
Proceedings Article

Review Networks for Caption Generation

Abstract: We propose a novel extension of the encoder-decoder framework, called a review network. The review network is generic and can enhance any existing encoder-decoder model: in this paper, we consider RNN decoders with both CNN and RNN encoders. The review network performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a thought vector after each review step; the thought vectors are used as the input of the attention mechanism in the decoder. We show that conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework improves over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning.
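The review module described above can be sketched in a few lines: each review step attends over the encoder hidden states and emits a thought vector, and the collected thought vectors are what the decoder would later attend over. This is a minimal NumPy sketch, not the paper's implementation; the shapes, random parameter initialization, and the tanh update rule are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def review_network(encoder_states, num_steps=3, seed=0):
    """Run `num_steps` review steps over encoder hidden states of shape (T, d).
    Each step attends over all encoder states and emits one thought vector;
    returns an array of shape (num_steps, d) for the decoder to attend over."""
    T, d = encoder_states.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d, d)) * 0.1  # hypothetical review-step weights
    h = np.zeros(d)                        # review-step hidden state
    thought_vectors = []
    for _ in range(num_steps):
        scores = encoder_states @ h        # attention scores over encoder states
        alpha = softmax(scores)            # attention weights
        context = alpha @ encoder_states   # attention-weighted context vector
        h = np.tanh(W @ (h + context))     # update review hidden state
        thought_vectors.append(h)          # emit one thought vector per step
    return np.stack(thought_vectors)
```

Note how the review steps are decoupled from the input sequence length: the decoder attends over a fixed number of thought vectors rather than over all T encoder states.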
Proceedings ArticleDOI

Unsupervised Cross-lingual Transfer of Word Embedding Spaces

TL;DR: This paper proposes an unsupervised learning approach that does not require any cross-lingual labeled data, and optimizes the transformation functions in both directions simultaneously based on distributional matching as well as minimizing the back-translation losses.
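The back-translation idea in the summary above can be illustrated with a toy loss: two transformation matrices map embeddings between the languages, and the round trip through both maps should reconstruct the original vectors. This is a hedged sketch under the simplifying assumption that the transformations are linear; the function and variable names are hypothetical, not from the paper.

```python
import numpy as np

def back_translation_loss(X, Y, W_xy, W_yx):
    """Toy back-translation objective for cross-lingual embedding transfer.
    X, Y: embedding matrices of shape (n, d) in the two languages.
    W_xy maps X-space to Y-space; W_yx maps Y-space back to X-space.
    Penalizes round trips that fail to reconstruct the original embeddings."""
    loss_x = np.mean((X - X @ W_xy @ W_yx) ** 2)  # X -> Y -> X round trip
    loss_y = np.mean((Y - Y @ W_yx @ W_xy) ** 2)  # Y -> X -> Y round trip
    return loss_x + loss_y
```

Training both directions jointly, as the summary describes, means this loss would be minimized over W_xy and W_yx together, alongside a distributional-matching term that aligns the mapped embeddings with the target-language distribution.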