Xi Victoria Lin
Publications - 12
Citations - 881
Xi Victoria Lin is an academic researcher who has contributed to research in the topics of Computer science & Paleontology. The author has an h-index of 7 and has co-authored 12 publications receiving 881 citations.
Papers
Journal Article
OPT: Open Pre-trained Transformer Language Models
Susan Zhang,Stephen Roller,Naman Goyal,Mikel Artetxe,Moya Chen,Shuohui Chen,Christopher Dewan,Mona Zidan Diab,Xian Li,Xi Victoria Lin,Todor Mihaylov,Myle Ott,Sam Shleifer,Kurt Shuster,Daniel Simig,Punit Singh Koura,Anjali Sridhar,Tianlu Wang,Luke Zettlemoyer +18 more
TL;DR: This work presents Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which the authors aim to fully and responsibly share with interested researchers.
Proceedings ArticleDOI
Lifting the Curse of Multilinguality by Pre-training Modular Transformers
Jonas Pfeiffer,Naman Goyal,Xi Victoria Lin,Xian Li,James Cross,Sebastian Riedel,Mikel Artetxe +6 more
TL;DR: The authors pre-train the modules of cross-lingual modular models from the start, which not only mitigates the negative interference between languages, but also enables positive transfer.
Proceedings Article
Few-shot Learning with Multilingual Generative Language Models
Xi Victoria Lin,Todor Mihaylov,Mikel Artetxe,Tianlu Wang,Shuohui Chen,Daniel Simig,Myle Ott,Naman Goyal,Shruti Bhosale,Jingfei Du,Ramakanth Pasunuru,Sam Shleifer,Punit Singh Koura,Vishrav Chaudhary,Brian O'Horo,Jeff Wang,Luke Zettlemoyer,Zornitsa Petrova Kozareva,Mona Zidan Diab,Veselin Stoyanov,Xian Li +20 more
TL;DR: The authors train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities on a wide range of tasks, including commonsense reasoning and natural language inference.
Journal ArticleDOI
LEVER: Learning to Verify Language-to-Code Generation with Execution
Ansong Ni,Srinivasan Iyer,Dragomir R. Radev,Veselin Stoyanov,Wen-tau Yih,Sida Wang,Xi Victoria Lin +6 more
TL;DR: LEVER learns to verify generated programs using their execution results: verifiers are trained to determine whether a program sampled from an LLM is correct, based on the natural language input, the program itself, and its execution result.
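The verify-then-rerank idea described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: `run_program`, `verifier_score`, and the toy candidates are hypothetical placeholders, and the real verifier is a learned model rather than a hand-written rule.

```python
import math

def run_program(program: str):
    """Execute a candidate program and return its result (None on error)."""
    try:
        return eval(program, {"math": math})
    except Exception:
        return None

def verifier_score(question: str, program: str, result) -> float:
    """Toy stand-in for a learned verifier estimating
    P(correct | question, program, execution result)."""
    return 1.0 if result is not None else 0.0

def rerank(question, candidates):
    """Rerank sampled programs by generator log-prob plus verifier score."""
    scored = []
    for program, gen_logprob in candidates:
        result = run_program(program)
        scored.append((gen_logprob + verifier_score(question, program, result),
                       program))
    return max(scored)[1]

# A syntactically broken sample with a high generator score loses to a
# runnable one once execution feedback is taken into account.
candidates = [("2 **", -0.1), ("2 ** 10", -0.5)]
print(rerank("What is 2 to the 10th power?", candidates))  # → 2 ** 10
```

The key design point is that the verifier conditions on the execution result, so programs that crash or return implausible values are demoted even when the generator assigned them high probability.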
Journal ArticleDOI
FOLIO: Natural Language Reasoning with First-Order Logic
Simeng Han,Hailey Schoelkopf,Yilun Zhao,Zhenting Qi,Martin Riddell,Luke Benson,Lucy Sun,E V Zubova,Yujie Qiao,Matthew Burtell,David Peng,Jonathan Fan,Yixin Liu,Brian Wong,Malcolm Sailor,Ansong Ni,Linyong Nan,Jungo Kasai,Rui Zhang,Shafiq Joty,Alexander R. Fabbri,Wojciech Kryscinski,Xi Victoria Lin,Caiming Xiong,Dragomir R. Radev +24 more
TL;DR: The results show that one of the most capable publicly available Large Language Models (LLMs), GPT-3 davinci, achieves only slightly better than random performance with few-shot prompting on a subset of FOLIO, and the model is especially bad at predicting the correct truth values for False and Unknown conclusions.