Ramakanth Pasunuru
Publications - 7
Citations - 223
Ramakanth Pasunuru is an academic researcher whose work focuses on computer science. He has an h-index of 6 and has co-authored 7 publications receiving 223 citations.
Papers
Journal ArticleDOI
Augmented Language Models: a Survey
Grégoire Mialon,Roberto Dessì,Maria Lomeli,Christoforos Nalmpantis,Ramakanth Pasunuru,Roberta Raileanu,Baptiste Roziere,Timo Schick,Jane Dwivedi-Yu,A. Celikyilmaz,Edouard Grave,Yann LeCun,Thomas Scialom +12 more
TL;DR: This survey covers augmented language models (ALMs), which can learn to reason, use tools, and even act, while still performing standard natural language tasks and even outperforming most regular LMs on several benchmarks.
Journal ArticleDOI
OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization
Srinivasan Iyer,Xi Victoria Lin,Ramakanth Pasunuru,Todor Mihaylov,Daniel Simig,Ping Yu,Kurt Shuster,Tianlu Wang,Qing Liu,Punit Singh Koura,Xian Li,Brian O'Horo,Gabriel Pereyra,Jeff Wang,Christopher Dewan,A. Celikyilmaz,Luke Zettlemoyer,Veselin Stoyanov +17 more
TL;DR: OPT-IML Bench is a large benchmark for instruction meta-learning (IML) consisting of 2,000 NLP tasks consolidated into task categories from 8 existing benchmarks, together with an evaluation framework that measures three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks.
Proceedings ArticleDOI
Complementary Explanations for Effective In-Context Learning
Xi Ye,Srinivasan Iyer,A. Celikyilmaz,Veselin Stoyanov,Gregory Christopher Durrett,Ramakanth Pasunuru +5 more
TL;DR: The authors study the effect of explanations on the performance of large language models and propose a maximal marginal relevance-based exemplar selection approach for constructing exemplar sets that are both relevant and complementary.
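The TL;DR above refers to maximal marginal relevance (MMR), a standard greedy selection technique that trades off relevance to a query against redundancy with items already chosen. A minimal sketch of the general MMR idea follows; the cosine similarity, the `lam` trade-off weight, and the function name are illustrative assumptions, not the paper's exact exemplar-scoring setup.

```python
import math


def mmr_select(query_vec, candidate_vecs, k, lam=0.5):
    """Greedily pick k candidate indices, balancing relevance to the
    query (weight lam) against redundancy with already-selected items
    (weight 1 - lam). Higher lam favors relevance; lower lam, diversity."""

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    selected = []
    remaining = list(range(len(candidate_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(candidate_vecs[i], query_vec)
            # Redundancy = similarity to the closest already-picked item.
            redundancy = max(
                (cosine(candidate_vecs[i], candidate_vecs[j]) for j in selected),
                default=0.0,
            )
            return lam * relevance - (1 - lam) * redundancy

        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, with a query vector `[1, 0]` and candidates `[[1, 0], [0.9, 0.1], [0, 1]]`, a diversity-leaning `lam=0.3` skips the near-duplicate second candidate and returns indices `[0, 2]`.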
Proceedings Article
Few-shot Learning with Multilingual Generative Language Models
Xi Victoria Lin,Todor Mihaylov,Mikel Artetxe,Tianlu Wang,Shuohui Chen,Daniel Simig,Myle Ott,Naman Goyal,Shruti Bhosale,Jingfei Du,Ramakanth Pasunuru,Sam Shleifer,Punit Singh Koura,Vishrav Chaudhary,Brian O'Horo,Jeff Wang,Luke Zettlemoyer,Zornitsa Petrova Kozareva,Mona Zidan Diab,Veselin Stoyanov,Xian Li +20 more
TL;DR: The authors train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities on a wide range of tasks, including commonsense reasoning and natural language inference.
Proceedings ArticleDOI
Improving In-Context Few-Shot Learning via Self-Supervised Training
Mingda Chen,Jingfei Du,Ramakanth Pasunuru,Todor Mihaylov,Srinivasan Iyer,Veselin Stoyanov,Zornitsa Petrova Kozareva +6 more
TL;DR: This paper proposes adding a self-supervised intermediate training stage between pretraining and downstream few-shot usage, with the goal of teaching the model to perform in-context few-shot learning.