BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer
pp. 7871–7880
TL;DR
BART is presented, a denoising autoencoder for pretraining sequence-to-sequence models, which matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks.
Abstract
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.
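The two noising strategies the abstract singles out, sentence permutation and text infilling, can be sketched as follows. This is a minimal illustration assuming whitespace tokenization and a generic <mask> symbol; span lengths are drawn from a Poisson(lambda=3) distribution as described in the paper, but the masking ratio and helper names are illustrative rather than the authors' implementation.

```python
import random
import numpy as np

MASK = "<mask>"

def permute_sentences(sentences):
    """Sentence permutation: shuffle the document's sentences into a random order."""
    shuffled = list(sentences)
    random.shuffle(shuffled)
    return shuffled

def text_infilling(tokens, mask_ratio=0.3, poisson_lam=3.0):
    """Text infilling: replace sampled spans with a single <mask> token each.

    The reconstruction target for the seq2seq model is the original token list.
    """
    budget = int(len(tokens) * mask_ratio)   # roughly how many tokens to corrupt
    corrupted, i = [], 0
    while i < len(tokens):
        if budget > 0 and random.random() < mask_ratio:
            span = int(np.random.poisson(poisson_lam))
            corrupted.append(MASK)           # one mask token stands in for the whole span
            i += span                        # a 0-length span inserts a mask without removing text
            budget -= max(span, 1)
        else:
            corrupted.append(tokens[i])
            i += 1
    return corrupted

sentences = ["BART corrupts text with noise .", "The decoder reconstructs the original ."]
noisy_doc = " ".join(permute_sentences(sentences)).split()
print(text_infilling(noisy_doc))
```

The seq2seq model is then trained to reproduce the uncorrupted sequence from such noisy input.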
Citations
Proceedings ArticleDOI
Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction
TL;DR: This paper proposed an unsupervised objective function, combining language modeling and semantic similarity metrics, to generate a shorter version of a sentence while preserving its most important information, and achieved state-of-the-art performance.
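As a rough illustration of how such an objective can combine the two signals, the sketch below scores a candidate extraction by language-model fluency plus embedding similarity to the source sentence; the weights and the lm_log_prob/embed callables are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def summary_score(candidate, source, lm_log_prob, embed, alpha=1.0, beta=1.0):
    """Score a candidate extraction by fluency plus semantic similarity to the source.

    candidate, source: lists of tokens
    lm_log_prob: callable returning the log-probability of a token sequence under a language model
    embed: callable returning a fixed-size sentence embedding (numpy array)
    alpha, beta: illustrative weights balancing the two terms
    """
    fluency = lm_log_prob(candidate) / max(len(candidate), 1)   # length-normalized LM score
    similarity = cosine(embed(candidate), embed(source))        # semantic preservation term
    return alpha * fluency + beta * similarity
```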
Proceedings ArticleDOI
Re2G: Retrieve, Rerank, Generate
Michael Glass, Gaetano Rossiello, Md. Faisal Mahbub Chowdhury, Ankita Rajaram Naik, Pengshan Cai, Alfio Gliozzo
TL;DR: This article proposed Re2G, which combines both neural initial retrieval and re-ranking into a BART-based sequence-to-sequence generation model for zero-shot slot filling, question answering, fact checking and dialog.
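The retrieve, rerank, generate decomposition can be pictured as a three-stage pipeline like the skeleton below; the retrieve, rerank, and generate callables are placeholders standing in for a neural retriever, a reranker, and a BART-style generator, not Re2G's actual components.

```python
def retrieve_rerank_generate(query, retrieve, rerank, generate, k_retrieve=100, k_keep=5):
    """Retrieve candidate passages, rerank them, and condition a seq2seq generator on the top few.

    retrieve(query, k) -> list of passages (cheap first-stage retrieval)
    rerank(query, passages) -> passages sorted by a finer-grained relevance score
    generate(query, passages) -> output string from a BART-like seq2seq model
    """
    candidates = retrieve(query, k_retrieve)        # broad first-stage retrieval
    reranked = rerank(query, candidates)[:k_keep]   # keep only the best-scoring evidence
    return generate(query, reranked)                # fuse query and evidence in the generator
```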
Journal ArticleDOI
ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning
Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man Duc Trong, Franck Dernoncourt, Trung D. Bui, Thien Nguyen
TL;DR: This article evaluated ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources, and compared the performance of different models and languages, calling for further research to develop better models and understanding for multilingual learning.
Proceedings ArticleDOI
Path Language Modeling over Knowledge Graphs for Explainable Recommendation
TL;DR: A novel Path Language Modeling Recommendation (PLM-Rec) framework is proposed that learns a language model over KG paths consisting of entities and edges, unifying recommendation and explanation so that both are produced in a single step.
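The core idea, treating KG paths of alternating entities and relations as token sequences for a language model, can be illustrated with a small linearization sketch; the prefixes and example path below are purely illustrative, not the PLM-Rec vocabulary.

```python
def linearize_path(path):
    """Turn an alternating entity/relation path into a token sequence for a language model.

    path: [entity, relation, entity, relation, ..., entity]
    Entities and relations get distinguishing prefixes so the model can treat
    nodes and edges as separate token types (the prefixes are illustrative).
    """
    tokens = []
    for i, element in enumerate(path):
        prefix = "E:" if i % 2 == 0 else "R:"
        tokens.append(prefix + element)
    return " ".join(tokens)

print(linearize_path(["user_42", "purchased", "item_7", "also_bought", "item_9"]))
# -> E:user_42 R:purchased E:item_7 R:also_bought E:item_9
```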
Proceedings ArticleDOI
MuCGEC: a Multi-Reference Multi-Source Evaluation Dataset for Chinese Grammatical Error Correction
TL;DR: This paper presents MuCGEC, a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three Chinese-as-a-Second-Language (CSL) learner sources, and conducts experiments with two mainstream CGEC models, both enhanced with large pretrained language models.
References
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposed the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-German and English-to-French translation.
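The central operation the paper builds on is scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V; the small NumPy rendering below (single head, no masking) shows that computation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, the Transformer's core operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                                        # weighted sum of value vectors

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8): one output vector per query
```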
Proceedings ArticleDOI
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, and can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
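A minimal example of that fine-tuning recipe, one additional output layer on top of the pretrained encoder, using the Hugging Face transformers library rather than the authors' original code; the model name, sentences, and labels below are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pretrained bidirectional encoder plus a randomly initialized classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)   # forward pass returns the classification loss
outputs.loss.backward()                   # gradients flow into both the head and the encoder
```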
Posted Content
Efficient Estimation of Word Representations in Vector Space
TL;DR: This paper proposed two novel model architectures for computing continuous vector representations of words from very large data sets; the quality of these representations is measured on a word similarity task and compared to previously best-performing techniques based on different types of neural networks.
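One common way to train such continuous word representations in practice is the gensim library's Word2Vec class, which implements both architectures proposed here (skip-gram and CBOW); the corpus and hyperparameters below are toy values for illustration (gensim >= 4.0 API assumed).

```python
from gensim.models import Word2Vec

corpus = [
    ["bart", "is", "a", "denoising", "autoencoder"],
    ["word", "vectors", "capture", "similarity"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,   # dimensionality of the word vectors
    window=5,          # context window size
    min_count=1,       # keep even rare words in this toy corpus
    sg=1,              # 1 = skip-gram, 0 = CBOW (the two proposed architectures)
)

vector = model.wv["bart"]                 # the learned 100-dimensional vector
similar = model.wv.most_similar("bart")   # nearest neighbours in the vector space
```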
Posted Content
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov
TL;DR: It is found that BERT was significantly undertrained and, when trained more carefully, can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE and SQuAD.
Proceedings ArticleDOI
Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
TL;DR: This paper introduced a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics), and how these uses vary across linguistic contexts (i.e., to model polysemy).
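Concretely, an ELMo-style representation for a token is a softmax-weighted combination of the bidirectional language model's layer activations, scaled by a task-specific scalar; the NumPy sketch below shows that combination with illustrative shapes and values, not the authors' code.

```python
import numpy as np

def elmo_combine(layer_activations, s, gamma):
    """Weighted combination of biLM layers for one token.

    layer_activations: array of shape (num_layers, dim), the biLM's hidden states for the token
    s: unnormalized per-layer weights, shape (num_layers,), softmax-normalized below
    gamma: task-specific scalar that scales the whole vector
    """
    weights = np.exp(s - s.max())
    weights = weights / weights.sum()                            # softmax over layers
    return gamma * (weights[:, None] * layer_activations).sum(axis=0)

layers = np.random.randn(3, 8)   # e.g. a character-level layer plus two biLSTM layers, dim 8
print(elmo_combine(layers, s=np.zeros(3), gamma=1.0))   # equal weights reduce to a simple average
```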