Proceedings Article
What Language Model to Train if You Have One Million GPU Hours?
Teven Le Scao, Thomas Wang, Daniel Hesslow, Lucile Saulnier, Stas Bekman, Saiful Bari, Stella Biderman, Hady Elsahar, Niklas Muennighoff, Jason Phang, Ofir Press, Colin Raffel, Victor Sanh, Sheng Shen, Lintang A. Sutawika, Jae-Oong Tae, Zheng-Xin Yong, Julien Launay, Iz Beltagy
pp. 765-782
TLDR
An ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization is performed, and the performance of a multilingual model is studied and compared to that of an English-only one.
Abstract:
The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, with the emergence of state-of-the-art 100B+ parameter models, large language models are increasingly expensive to accurately design and train. Notably, it can be difficult to evaluate how modeling decisions may impact emergent capabilities, given that these capabilities arise mainly from sheer scale alone. In the process of building BLOOM (the Big Science Large Open-science Open-access Multilingual language model), our goal is to identify an architecture and training setup that makes the best use of our 1,000,000 A100-GPU-hours budget. Specifically, we perform an ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization. In addition, we study the impact of various popular pre-training corpora on zero-shot generalization. We also study the performance of a multilingual model and how it compares to the English-only one. Finally, we consider the scaling behaviour of Transformers to choose the target model size, shape, and training setup. All our models and code are open-sourced at https://huggingface.co/bigscience.
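For a rough sense of what such a budget buys, a back-of-the-envelope sketch is useful. The numbers below are assumptions for illustration, not figures from the paper: an A100 BF16 peak of ~312 TFLOP/s, an assumed ~40% sustained utilization, and the common C ≈ 6ND approximation for training compute.

```python
# Back-of-the-envelope sizing under a 1,000,000 A100-GPU-hour budget.
# Assumptions (not from the paper): ~312 TFLOP/s BF16 peak per A100,
# ~40% sustained utilization, and training compute C ~= 6 * N * D.

GPU_HOURS = 1_000_000
PEAK_FLOPS = 312e12      # A100 BF16 peak, FLOP per second
UTILIZATION = 0.40       # assumed sustained fraction of peak

total_flops = GPU_HOURS * 3600 * PEAK_FLOPS * UTILIZATION

def trainable_tokens(n_params: float) -> float:
    """Tokens trainable under the budget for a model with n_params parameters."""
    return total_flops / (6 * n_params)

for n_params in (1.3e9, 13e9, 176e9):
    print(f"{n_params / 1e9:>6.1f}B params -> ~{trainable_tokens(n_params) / 1e9:,.0f}B tokens")
```

Under these assumptions, a 176B-parameter model can see on the order of a few hundred billion tokens within the budget, which is the kind of trade-off between model size and data the paper's scaling analysis weighs.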
Citations
Journal Article
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Teven Le Scao, Angela Fan, Christopher Akiki, Elizabeth-Jane Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Luccioni, François Yvon, Matthias Gallé, J. S. Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, A. Villanova del Moral, Olatunji Ruwase, R. Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Javier Ortiz Suárez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel +386 more
TL;DR: BLOOM as discussed by the authors is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total).
Journal Article
A Survey of Large Language Models
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhongyong Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen
TL;DR: A survey of large language models (LLMs), which are obtained by pre-training Transformer models over large-scale corpora and show strong capabilities in solving various NLP tasks.
Proceedings Article
GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Feng Zhang, Yuxiao Dong, Jie Tang
TL;DR: Introduces GLM-130B, an attempt to open-source a 100B-scale model at least as good as GPT-3, and unveils how models of such a scale can be successfully pre-trained, including its design choices, training strategies for both efficiency and stability, and engineering efforts.
Journal Article
Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling
Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, Usvsn Sai Prashanth, Edward Raff, Lintang A. Sutawika, Oskar van der Wal
TL;DR: Pythia, as discussed by the authors, is a suite of 16 language models trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters, with 154 checkpoints for each of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study.
Proceedings Article
What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?
Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, Colin Raffel
TL;DR: A large-scale evaluation of modeling choices and their impact on zero-shot generalization of large pretrained Transformer language models focuses on text-to-text models and shows that causal decoder-only models trained on an autoregressive language modeling objective exhibit the strongest zero-shot generalization after purely self-supervised pretraining.
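As a concrete picture of the zero-shot setup these works evaluate, the sketch below scores verbalized answer candidates by their likelihood under an off-the-shelf causal decoder-only model. The gpt2 checkpoint and the prompt are stand-ins for illustration, not the models or tasks used in the papers.

```python
# Minimal zero-shot classification sketch with a causal decoder-only LM:
# rank answer candidates by average negative log-likelihood of prompt + candidate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "Review: 'A moving and beautifully shot film.' Sentiment:"
candidates = [" positive", " negative"]

def avg_nll(text: str) -> float:
    """Average negative log-likelihood of the full text under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

prediction = min(candidates, key=lambda c: avg_nll(prompt + c))
print(prediction.strip())
```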
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
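The update rule summarized above is compact enough to restate directly; the sketch below follows the paper's default hyperparameters (beta1 = 0.9, beta2 = 0.999, eps = 1e-8).

```python
# One Adam step: exponential moving averages of the gradient and its square,
# bias correction, then a parameter update scaled by the corrected moments.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```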
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposes a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
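The core operation, scaled dot-product attention, fits in a few lines; this is a single-head sketch without masking or the multi-head projections.

```python
# Scaled dot-product attention for one head: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values
```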
Posted Content
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TL;DR: This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
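The text-to-text framing means every task is serialized as an input string and a target string, signalled by a task prefix; the examples below loosely paraphrase the ones illustrated in the T5 paper.

```python
# Every task becomes "input text -> target text"; a prefix identifies the task.
examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    ("cola sentence: The course is jumping well.", "unacceptable"),
    ("summarize: state authorities dispatched emergency crews on Tuesday ...", "authorities dispatched crews ..."),
]
for source, target in examples:
    print(f"{source!r}  ->  {target!r}")
```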
Proceedings Article
Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, Christopher Potts
TL;DR: A Sentiment Treebank that includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality, and introduces the Recursive Neural Tensor Network.
Posted Content
SQuAD: 100,000+ Questions for Machine Comprehension of Text
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
TL;DR: The Stanford Question Answering Dataset (SQuAD) as mentioned in this paper is a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.
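The extractive format described above is easy to picture with a single hypothetical record: the answer is a literal span of the passage, addressed by its start character offset. Field names mirror the flattened form commonly used for the dataset; the text itself is made up for illustration.

```python
# Hypothetical SQuAD-style record (context / question / answers with answer_start).
example = {
    "context": "The Amazon rainforest covers most of the Amazon basin of South America.",
    "question": "What does the Amazon rainforest cover?",
    "answers": [{"text": "most of the Amazon basin of South America", "answer_start": 29}],
}

answer = example["answers"][0]
start = answer["answer_start"]
# The answer text is recoverable as a character span of the context.
assert example["context"][start : start + len(answer["text"])] == answer["text"]
print(answer["text"])
```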