Xuezhi Wang
Researcher at Google
Publications - 67
Citations - 5817
Xuezhi Wang is an academic researcher from Google. The author has contributed to research in topics including Computer science and Robustness (computer science). The author has an h-index of 14 and has co-authored 43 publications receiving 1055 citations. Previous affiliations of Xuezhi Wang include University of Southern California & Tsinghua University.
Papers
Journal Article
PaLM: Scaling Language Modeling with Pathways
Aakanksha Chowdhery,Sharan Narang,Jacob Devlin,Maarten Bosma,Gaurav Mishra,Adam Roberts,Paul Barham,Hyung Won Chung,Charles Sutton,Sebastian Gehrmann,Parker Schuh,Kensen Shi,Sasha Tsvyashchenko,Joshua Maynez,Abhishek Rao,Parker Barnes,Yi Tay,Noam Shazeer,Vinodkumar Prabhakaran,Emily Reif,Nan Du,Ben Hutchinson,Reiner Pope,James Bradbury,Jacob Austin,Michael Isard,Guy Gur-Ari,Pengcheng Yin,Toju Duke,Anselm Levskaya,Sanjay Ghemawat,Sunipa Dev,Henryk Michalewski,Xavier Garcia,Vedant Misra,Kevin Robinson,Liam Fedus,Denny Zhou,Daphne Ippolito,David Luan,Hyeontaek Lim,Barret Zoph,Alexander Spiridonov,Ryan Sepassi,David Dohan,Shivani Agrawal,Mark Omernick,Andrew M. Dai,Thanumalayan Sankaranarayana Pillai,Marie Pellat,Aitor Lewkowycz,Erica Moreira,Rewon Child,Oleksandr Polozov,Katherine Lee,Zongwei Zhou,Xuezhi Wang,Brennan Saeta,Mark Díaz,Orhan Firat,Michele Catasta,Jason Wei,Kathleen S. Meier-Hellstern,Douglas Eck,Jeffrey Dean,Slav Petrov,Noah Fiedel
TL;DR: PaLM, a 540-billion-parameter, densely activated Transformer language model, achieves breakthrough performance, outperforming the state of the art on a suite of multi-step reasoning tasks and exceeding average human performance on the recently released BIG-bench benchmark.
Proceedings Article
Chain of Thought Prompting Elicits Reasoning in Large Language Models
Jason Wei,Xuezhi Wang,Dale Schuurmans,Maarten Bosma,Ed H. Chi,Fei Xia,Quoc V. Le,Denny Zhou
TL;DR: Experiments on three large language models show that chain-of-thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks.
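For illustration, a minimal Python sketch of the prompting pattern the paper describes. The exemplar question is the paper's tennis-ball example; the generate completion function is a hypothetical stand-in for whichever text-generation API is used, not an API from the paper.

# Chain-of-thought prompting: the few-shot exemplar demonstrates
# intermediate reasoning steps before the final answer, encouraging
# the model to produce a reasoning chain for the new question.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def chain_of_thought_prompt(question: str) -> str:
    # Prepend the worked exemplar, then pose the new question.
    return COT_EXEMPLAR + "Q: " + question + "\nA:"

# Usage, assuming some completion call generate(prompt) -> str:
# print(generate(chain_of_thought_prompt("If a parking lot has 3 cars ...")))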
Posted Content
Underspecification Presents Challenges for Credibility in Modern Machine Learning
Alexander D'Amour,Katherine Heller,Dan Moldovan,Ben Adlam,Babak Alipanahi,Alex Beutel,Christina Chen,Jonathan Deaton,Jacob Eisenstein,Matthew D. Hoffman,Farhad Hormozdiari,Neil Houlsby,Shaobo Hou,Ghassen Jerfel,Alan Karthikesalingam,Mario Lucic,Yi-An Ma,Cory Y. McLean,Diana Mincu,Akinori Mitani,Andrea Montanari,Zachary Nado,Vivek T. Natarajan,Christopher Nielson,Thomas F. Osborne,Rajiv Raman,Kim Ramasamy,Rory Sayres,Jessica Schrouff,Martin G. Seneviratne,Shannon Sequeira,Harini Suresh,Victor Veitch,Max Vladymyrov,Xuezhi Wang,Kellie Webster,Steve Yadlowsky,Taedong Yun,Xiaohua Zhai,D. Sculley
TL;DR: This work shows that underspecification appears in a wide variety of practical ML pipelines and argues for explicitly accounting for it in any modeling pipeline intended for real-world deployment.
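As a toy illustration of the phenomenon (invented for this sketch, not the paper's experiments): retraining an identical pipeline with different random seeds can yield predictors with near-identical held-out accuracy that nonetheless behave differently on shifted inputs.

# Underspecification, illustrated: several equally-accurate models from the
# same pipeline can disagree once the input distribution shifts, so held-out
# accuracy alone underdetermines deployment behavior.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # labels depend on 2 features
X_shift = X + np.r_[np.zeros(2), 2 * np.ones(18)]  # perturb the other 18

for seed in range(3):
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=seed).fit(X[:1500], y[:1500])
    print(f"seed={seed}",
          f"held-out={model.score(X[1500:], y[1500:]):.3f}",        # ~identical
          f"shifted={model.score(X_shift[1500:], y[1500:]):.3f}")   # often varies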
Proceedings Article
Self-Consistency Improves Chain of Thought Reasoning in Language Models
TL;DR: This work explores self-consistency, a simple ensemble strategy that robustly improves accuracy across a variety of language models and model scales without the need for additional training or auxiliary models.
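A minimal Python sketch of the strategy: sample several chain-of-thought completions for the same prompt at nonzero temperature, parse each final answer, and keep the majority vote. The sample function is a placeholder assumed for this sketch, not an API from the paper; the "The answer is X." parsing convention follows the chain-of-thought prompt format above.

# Self-consistency: marginalize over sampled reasoning paths by majority
# vote on the parsed final answers. sample(prompt) stands in for any
# stochastic (temperature > 0) completion call -- an assumption here.
import re
from collections import Counter

def extract_answer(completion: str):
    # Reasoning chains in the prompts end with "The answer is X."
    match = re.search(r"The answer is (.+?)\.", completion)
    return match.group(1) if match else None

def self_consistency(prompt: str, sample, n_samples: int = 40) -> str:
    answers = [extract_answer(sample(prompt)) for _ in range(n_samples)]
    votes = Counter(a for a in answers if a is not None)
    if not votes:
        raise ValueError("no parseable answers sampled")
    return votes.most_common(1)[0][0]   # most frequent final answer wins

Because only the final answers are voted on, diverse reasoning paths that reach the same answer reinforce each other, with no additional training or auxiliary models required.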
Journal Article
Scaling Instruction-Finetuned Language Models
Hyung Won Chung,Le Hou,Shayne Longpre,Barret Zoph,Yi Tay,William Fedus,Eric Li,Xuezhi Wang,Mostafa Dehghani,Siddhartha Brahma,Albert Webson,Shixiang Gu,Zhuyun Dai,Mirac Suzgun,Xinyun Chen,Aakanksha Chowdhery,Dasha Valter,Sharan Narang,Gaurav Mishra,Adams Wei Yu,Vincent Zhao,Yanping Huang,Andrew M. Dai,Hongkun Yu,Slav Petrov,Ed H. Chi,Jeffrey Dean,Jacob Devlin,Adam Roberts,Denny Zhou,Quoc V. Le,Jason Wei
TL;DR: This result shows that instruction finetuning and UL2 continued pre-training are complementary compute-efficient methods for improving the performance of language models without increasing model scale.