
Furu Wei

Researcher at Microsoft

Publications: 264
Citations: 20,189

Furu Wei is an academic researcher at Microsoft. The author has contributed to research in topics including Automatic summarization and Sentence. The author has an h-index of 62 and has co-authored 264 publications receiving 14,170 citations. Previous affiliations of Furu Wei include Beijing Institute of Technology and IBM.

Papers
Proceedings ArticleDOI

Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification

TL;DR: Three neural networks are developed to effectively incorporate supervision from the sentiment polarity of text (e.g., sentences or tweets) into their loss functions; performance is further improved by concatenating the resulting sentiment-specific word embeddings (SSWE) with an existing feature set.
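Below is a minimal, illustrative sketch (not the authors' code) of how sentiment supervision can be folded into a word-embedding objective: a window of word vectors receives both an n-gram plausibility score and a sentiment score, and hinge losses over the two are combined. The layer sizes, the corruption scheme, and the 0.5 weighting are assumptions for illustration.

```python
# Sketch of a sentiment-aware word-embedding objective (illustrative only).
import torch
import torch.nn as nn

class SSWESketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, window=3, hidden=20):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.hidden = nn.Linear(window * emb_dim, hidden)
        self.syntactic = nn.Linear(hidden, 1)   # n-gram plausibility score
        self.sentiment = nn.Linear(hidden, 2)   # positive / negative scores

    def forward(self, window_ids):
        h = torch.tanh(self.hidden(self.emb(window_ids).flatten(1)))
        return self.syntactic(h).squeeze(-1), self.sentiment(h)

def sswe_loss(model, true_win, corrupt_win, polarity, alpha=0.5):
    """Hinge loss over a true window vs. a corrupted one, plus a
    sentiment hinge using the tweet's polarity label (0=neg, 1=pos)."""
    syn_t, sent_t = model(true_win)
    syn_c, _ = model(corrupt_win)
    loss_syn = torch.clamp(1 - syn_t + syn_c, min=0).mean()
    gold = sent_t.gather(1, polarity.unsqueeze(1)).squeeze(1)
    other = sent_t.gather(1, (1 - polarity).unsqueeze(1)).squeeze(1)
    loss_sent = torch.clamp(1 - gold + other, min=0).mean()
    return alpha * loss_syn + (1 - alpha) * loss_sent

# Toy usage: corrupt the middle word of each window.
model = SSWESketch(vocab_size=1000)
true_win = torch.randint(0, 1000, (4, 3))
corrupt_win = true_win.clone()
corrupt_win[:, 1] = torch.randint(0, 1000, (4,))
loss = sswe_loss(model, true_win, corrupt_win, torch.tensor([1, 0, 1, 0]))
loss.backward()
```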
Proceedings Article

Unified Language Model Pre-training for Natural Language Understanding and Generation

TL;DR: UniLM is a unified pre-trained language model that can be fine-tuned for both natural language understanding and generation tasks, achieving state-of-the-art results on five natural language generation datasets, including improving CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (a 2.04 absolute improvement).
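The unifying idea is a single shared Transformer whose pre-training objective is selected by the self-attention mask. The sketch below (illustrative, not UniLM's released code) builds the three mask patterns; the mask layout and the src_len/tgt_len split are assumptions.

```python
# Attention-mask construction for unidirectional, bidirectional, and
# sequence-to-sequence language modeling with one shared model (sketch).
import torch

def attention_mask(kind, src_len, tgt_len=0):
    """Return an (L, L) mask where 1 means the query may attend to the key."""
    L = src_len + tgt_len
    if kind == "bidirectional":          # BERT-style: every token sees every token
        return torch.ones(L, L)
    if kind == "left-to-right":          # GPT-style: causal mask
        return torch.tril(torch.ones(L, L))
    if kind == "seq2seq":                # source is bidirectional, target is causal
        mask = torch.zeros(L, L)
        mask[:, :src_len] = 1                                     # all tokens see the source
        mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))
        return mask
    raise ValueError(kind)

print(attention_mask("seq2seq", src_len=3, tgt_len=2))
```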
Posted Content

Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks

TL;DR: This paper proposes a new learning method, Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of image-text alignments.
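A rough sketch of the input layout this suggests: the Transformer consumes caption tokens, detected object tags, and region features as one sequence, with the tags serving as anchors between the modalities. The dimensions, the projection layer, and the toy inputs below are assumptions, not Oscar's actual implementation.

```python
# Sketch of an Oscar-style (word, tag, region) input sequence.
import torch
import torch.nn as nn

hidden = 768
word_emb = nn.Embedding(30522, hidden)        # text / tag vocabulary
region_proj = nn.Linear(2048, hidden)         # detector region features -> hidden size

caption_ids = torch.randint(0, 30522, (1, 12))   # tokenized caption
tag_ids     = torch.randint(0, 30522, (1, 5))    # detected tags, e.g. "dog", "ball"
regions     = torch.randn(1, 5, 2048)            # one feature vector per detected region

inputs = torch.cat(
    [word_emb(caption_ids), word_emb(tag_ids), region_proj(regions)], dim=1
)  # shape (1, 12 + 5 + 5, hidden), fed to a standard Transformer encoder
print(inputs.shape)
```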
Posted Content

VL-BERT: Pre-training of Generic Visual-Linguistic Representations

TL;DR: A new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT), which adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input.
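As a hedged illustration of the input scheme the abstract describes, each input element can be formed by summing a token embedding, a visual-feature embedding (an ROI feature, or a whole-image feature for text positions), a segment embedding, and a position embedding. The component names and sizes below are assumptions, not VL-BERT's actual code.

```python
# Sketch of forming one VL-BERT style input element (illustrative only).
import torch
import torch.nn as nn

hidden = 768
token_emb    = nn.Embedding(30522, hidden)
segment_emb  = nn.Embedding(2, hidden)        # 0 = linguistic, 1 = visual
position_emb = nn.Embedding(512, hidden)
visual_proj  = nn.Linear(2048, hidden)        # project detector ROI features

def element(token_id, visual_feature, segment, position):
    """One input element = token + visual + segment + position embeddings."""
    ids = lambda v: torch.tensor([v])
    return (token_emb(ids(token_id))
            + visual_proj(visual_feature)
            + segment_emb(ids(segment))
            + position_emb(ids(position)))

whole_image = torch.randn(1, 2048)            # stands in for the full-image feature
x = element(token_id=101, visual_feature=whole_image, segment=0, position=0)
print(x.shape)  # torch.Size([1, 768])
```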
Proceedings ArticleDOI

Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification

TL;DR: AdaRNN adaptively propagates the sentiments of words toward the target depending on the context and the syntactic relationships between them, and it is shown that AdaRNN outperforms the baseline methods.
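A minimal sketch of adaptive recursive composition in this spirit (not the paper's implementation): at each node of the tree, the two child vectors are combined by a soft mixture of candidate composition functions whose weights are predicted from the children themselves. The dimensionality, the number of functions, and the gating form are assumptions.

```python
# Sketch of an adaptive composition step for a recursive neural network.
import torch
import torch.nn as nn

class AdaptiveComposer(nn.Module):
    def __init__(self, dim=25, n_funcs=3):
        super().__init__()
        self.funcs = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(n_funcs))
        self.gate = nn.Linear(2 * dim, n_funcs)

    def forward(self, left, right):
        pair = torch.cat([left, right], dim=-1)
        weights = torch.softmax(self.gate(pair), dim=-1)           # mixture weights
        outs = torch.stack([torch.tanh(f(pair)) for f in self.funcs], dim=0)
        return (weights.unsqueeze(-1) * outs).sum(dim=0)           # weighted combination

composer = AdaptiveComposer()
not_vec, good_vec = torch.randn(25), torch.randn(25)
phrase = composer(not_vec, good_vec)   # e.g. composing "not" + "good"
print(phrase.shape)  # torch.Size([25])
```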