Unsupervised Cross-lingual Representation Learning at Scale
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov
pp. 8440–8451
TL;DR
Pretraining multilingual language models at scale is shown to yield significant performance gains on a wide range of cross-lingual transfer tasks, and multilingual modeling without sacrificing per-language performance is demonstrated for the first time.
Abstract:
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.
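The paper promises a public release of code and models; as a concrete illustration of how such a multilingual masked language model is typically queried, here is a minimal sketch using the Hugging Face transformers packaging of the checkpoint. The library, the "xlm-roberta-base" checkpoint name, and this interface are assumptions about the release, not details from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed public checkpoint name for the released XLM-R model.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# One shared model covers all one hundred languages; here, a French cloze.
text = f"La capitale de la France est {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Rank vocabulary items at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))
```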
Citations
Proceedings Article
mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel
TL;DR: mT5, a multilingual variant of T5, is pre-trained on a new Common Crawl-based dataset covering 101 languages and achieves state-of-the-art performance on many multilingual benchmarks.
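As an interface illustration, here is a hedged sketch of text-to-text inference with an mT5 checkpoint through the Hugging Face transformers library; the "google/mt5-small" name and this packaging are assumptions, and mT5 is pre-trained with span corruption only, so it needs fine-tuning before it produces useful task output.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed released checkpoint name for the smallest mT5 variant.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# Everything is cast as text in, text out; this call only shows the plumbing,
# since the unadapted pre-trained model will not follow the instruction.
inputs = tokenizer("translate English to German: Hello, world.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```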
Posted Content
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
TL;DR: This work introduces Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task.
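The core mechanic is easy to show in code: a pattern rewrites the input as a cloze phrase, and a verbalizer maps each label to a token whose score at the mask position is compared. The sketch below is illustrative only; the pattern, verbalizer, and "roberta-base" checkpoint are assumptions, not PET's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

review = "The food was cold and the service was slow."
pattern = f"{review} It was {tokenizer.mask_token}."          # cloze-style phrase
verbalizer = {"positive": " great", "negative": " terrible"}  # label -> token

inputs = tokenizer(pattern, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos[0]]

# Assumes each verbalizer entry is a single BPE token in this vocabulary.
scores = {label: logits[tokenizer.encode(tok, add_special_tokens=False)[0]].item()
          for label, tok in verbalizer.items()}
print(max(scores, key=scores.get))
```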
Posted Content
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson
TL;DR: The Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark is introduced: a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks.
Proceedings Article
CamemBERT: a Tasty French Language Model
Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, Benoît Sagot
TL;DR: This paper investigates the feasibility of training monolingual Transformer-based language models for languages other than English, taking French as an example, and evaluates the resulting models on part-of-speech tagging, dependency parsing, named entity recognition, and natural language inference tasks.
Journal Article
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, and several hundred further BigScience Workshop contributors
TL;DR: BLOOM, a 176B-parameter decoder-only Transformer language model, is trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total).
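For a sense of the interface, here is a hedged sketch of autoregressive generation with one of the smaller released BLOOM variants via the Hugging Face pipeline API; the "bigscience/bloom-560m" checkpoint name is an assumption about the public release.

```python
from transformers import pipeline

# Decoder-only models simply continue the prompt left to right.
generator = pipeline("text-generation", model="bigscience/bloom-560m")
out = generator("Un modèle de langue multilingue est", max_new_tokens=30)
print(out[0]["generated_text"])
```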
References
Proceedings Article
Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: The Transformer, a simple network architecture based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed and achieves state-of-the-art performance on English-to-French translation.
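The mechanism the summary refers to reduces to scaled dot-product attention, softmax(QKᵀ/√d_k)V; the NumPy sketch below is a minimal single-head illustration (shapes and names are ours, and the paper composes many such heads in parallel).

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```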
Proceedings Article
GloVe: Global Vectors for Word Representation
Jeffrey Pennington, Richard Socher, Christopher D. Manning
TL;DR: A new global log-bilinear regression model is presented that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
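For reference, the weighted least-squares objective behind that regression model can be stated as below; this is the standard formulation from the GloVe literature, reproduced here rather than quoted from this page.

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,
\qquad
f(x) = \begin{cases} (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\ 1 & \text{otherwise} \end{cases}
```

Here $X_{ij}$ counts co-occurrences of words $i$ and $j$, $w_i$ and $\tilde{w}_j$ are word and context vectors with biases $b_i$ and $\tilde{b}_j$, and $f$ downweights rare pairs.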
Proceedings Article
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, and can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
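The "one additional output layer" recipe is easy to sketch; the snippet below uses the Hugging Face transformers packaging of BERT, where the library and the "bert-base-uncased" checkpoint name are assumptions rather than details from the paper itself.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels attaches a freshly initialized classification head on top of the
# pre-trained bidirectional encoder; all weights are then fine-tuned jointly.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

batch = tokenizer(["a great movie", "a dull movie"], return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss  # cross-entropy through the new head
loss.backward()                            # gradients for one fine-tuning step
```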
Proceedings Article
Distributed Representations of Words and Phrases and their Compositionality
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean
TL;DR: A simple method for finding phrases in text is presented, learning good vector representations for millions of phrases is shown to be possible, and a simple alternative to the hierarchical softmax, called negative sampling, is described.
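The negative-sampling objective mentioned there replaces the full (hierarchical) softmax with k sampled contrasts per positive pair; a standard statement from the word2vec literature, not quoted from this page, is:

```latex
\log \sigma\!\left( {v'_{w_O}}^{\top} v_{w_I} \right)
+ \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)} \left[ \log \sigma\!\left( -{v'_{w_i}}^{\top} v_{w_I} \right) \right]
```

where $w_I$ and $w_O$ are the input and output words, $v$ and $v'$ their input and output embeddings, $\sigma$ the logistic function, and $P_n(w)$ a noise distribution over words.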
Posted Content
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov
TL;DR: BERT is found to be significantly undertrained; with an improved pretraining recipe it can match or exceed the performance of every model published after it, and the best model achieves state-of-the-art results on GLUE, RACE, and SQuAD.