Yuxiang Wu
Researcher at University College London
Publications - 23
Citations - 2075
Yuxiang Wu is an academic researcher at University College London. He has contributed to research on topics including computer science and question answering, has an h-index of 9, and has co-authored 16 publications receiving 991 citations. His previous affiliations include the Hong Kong University of Science and Technology.
Papers
Proceedings ArticleDOI
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
TL;DR: An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with that of traditional NLP methods that have some access to oracle knowledge.
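The probing setup behind this paper is simple to illustrate: facts are rewritten as cloze statements and the pretrained model fills in the blank, with no fine-tuning involved. Below is a minimal sketch using the HuggingFace transformers fill-mask pipeline; the model name, templates, and example facts are illustrative assumptions, not the paper's exact benchmark.

```python
# A minimal sketch of cloze-style knowledge probing, assuming the
# HuggingFace `transformers` library is installed; prompts are illustrative.
from transformers import pipeline

# The pretrained model is queried as-is; no fine-tuning is involved.
fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Each fact is turned into a cloze template and the model's top
# predictions for [MASK] are compared with the expected answer.
facts = [
    ("Dante was born in [MASK].", "Florence"),
    ("The capital of France is [MASK].", "Paris"),
]

for prompt, expected in facts:
    predictions = fill_mask(prompt, top_k=5)
    tokens = [p["token_str"].strip() for p in predictions]
    print(f"{prompt:45s} top-5: {tokens} hit@5={expected in tokens}")
```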
Proceedings ArticleDOI
End-to-end adversarial memory network for cross-domain sentiment classification
TL;DR: Introduces an end-to-end Adversarial Memory Network (AMN) for cross-domain sentiment classification that automatically captures pivot features via an attention mechanism and significantly outperforms state-of-the-art methods.
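The adversarial side of this approach can be sketched with a gradient reversal layer, a standard (DANN-style) way to realize adversarial domain adaptation; the attention memory itself is omitted here, and all layer sizes are assumptions, not the paper's architecture.

```python
# A minimal sketch of the adversarial training component, assuming PyTorch;
# sizes and heads are illustrative, not the paper's AMN architecture.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the
    backward pass, so the encoder learns to confuse the domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(300, 100), nn.Tanh())  # shared features
sentiment_head = nn.Linear(100, 2)                       # labeled source task
domain_head = nn.Linear(100, 2)                          # source vs. target

x = torch.randn(8, 300)                # a batch of text features
h = encoder(x)
sentiment_logits = sentiment_head(h)
domain_logits = domain_head(GradReverse.apply(h, 1.0))
# Minimizing the domain loss through the reversed gradient pushes the
# encoder toward domain-invariant (pivot-like) representations.
```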
Proceedings Article
Learning to extract coherent summary via deep reinforcement learning
Yuxiang Wu, Baotian Hu +1 more
TL;DR: Proposes a neural coherence model that captures cross-sentence semantic and syntactic coherence patterns, obviating the need for feature engineering; it can be trained end-to-end on unlabeled data.
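The reinforcement-learning side of this work can be sketched as a REINFORCE update in which an extractor policy samples a subset of sentences and a coherence scorer supplies the reward. The scorer below is a stub standing in for the paper's learned neural coherence model, and the policy is deliberately minimal.

```python
# A minimal REINFORCE sketch for coherence-rewarded extractive summarization,
# assuming PyTorch; the coherence scorer is a stub, not the paper's model.
import torch
import torch.nn as nn

def coherence_reward(selected: list[int]) -> float:
    # Stub: reward adjacent selections as a proxy for cross-sentence coherence.
    return sum(1.0 for a, b in zip(selected, selected[1:]) if b == a + 1)

n_sents, dim = 10, 64
policy = nn.Linear(dim, 1)                 # scores each sentence
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

sent_embs = torch.randn(n_sents, dim)      # document sentence embeddings
probs = torch.sigmoid(policy(sent_embs)).squeeze(-1)
dist = torch.distributions.Bernoulli(probs)
actions = dist.sample()                    # 1 = extract this sentence

selected = [i for i, a in enumerate(actions.tolist()) if a == 1]
reward = coherence_reward(selected)

# REINFORCE: scale the log-likelihood of the sampled extraction by its reward.
loss = -(dist.log_prob(actions).sum() * reward)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```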
Posted Content
How Context Affects Language Models' Factual Predictions
Fabio Petroni, Patrick S. H. Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
TL;DR: Reports that augmenting pretrained language models with relevant retrieved context dramatically improves their factual predictions, and that the resulting system, despite being unsupervised, is competitive with a supervised machine reading baseline.
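The core idea is easy to sketch: prepend a relevant passage to the cloze query before asking the model to fill the blank. In the sketch below the "retrieved" passage is hard-coded, whereas the paper obtains it with a retriever; the model name and examples are illustrative assumptions.

```python
# A minimal sketch of context-augmented cloze querying, assuming the
# HuggingFace `transformers` library; the context is hard-coded here,
# whereas the paper retrieves it automatically.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

query = "The theory of relativity was developed by [MASK]."
context = "Albert Einstein published the theory of relativity in the early 1900s."

without_ctx = fill_mask(query)[0]["token_str"]
with_ctx = fill_mask(context + " " + query)[0]["token_str"]

print(f"no context : {without_ctx}")
print(f"with context: {with_ctx}")  # relevant context typically sharpens the answer
```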