Ben Hutchinson
Researcher at Google
Publications: 56
Citations: 7,260
Ben Hutchinson is an academic researcher at Google. He has contributed to research on topics including Context (language use) and Computer science, has an h-index of 20, and has co-authored 44 publications receiving 2,920 citations. Previous affiliations include the University of Edinburgh.
Papers
Journal Article
Advances and open problems in federated learning
Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Ozgur, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao +58 more
TL;DR: Motivated by the explosive growth in federated learning research, the authors describe the state of the art in the field from the perspectives of distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, systems, information theory, and statistics, and present an extensive collection of open problems and challenges.
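The survey spans many subfields, but the optimization baseline most of them build on is federated averaging (FedAvg): sampled clients train locally on private data and a server averages their updates. The following is a minimal sketch of that loop, assuming a linear model, a squared-error loss, and NumPy; the client sampling, loss, and hyperparameters are illustrative, not the paper's own implementation.

```python
# Minimal sketch of federated averaging (FedAvg). The model is a flat
# NumPy weight vector; clients hold private (X, y) regression data.
import numpy as np

def local_sgd(weights, X, y, lr=0.1, epochs=1):
    """One client's local update: gradient steps on a mean-squared-error loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE
        w -= lr * grad
    return w

def fedavg_round(global_w, clients, sample_size=10):
    """One communication round: sample clients, run local training,
    and average the returned models weighted by client dataset size."""
    rng = np.random.default_rng()
    chosen = rng.choice(len(clients), size=min(sample_size, len(clients)),
                        replace=False)
    updates, sizes = [], []
    for i in chosen:
        X, y = clients[i]
        updates.append(local_sgd(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy usage: 50 clients, each holding a small private regression dataset.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(50):
    X = rng.normal(size=(20, 3))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=20)))

w = np.zeros(3)
for _ in range(25):
    w = fedavg_round(w, clients)
print(w)  # approaches true_w; raw data never leaves a client
```

Real deployments layer the paper's other concerns (secure aggregation, differential privacy, compression) on top of exactly this averaging step.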
Proceedings Article
Model Cards for Model Reporting
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru +8 more
TL;DR: This work proposes model cards, short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) relevant to the intended application domains. The framework can document any trained model in fields such as computer vision and natural language processing, and the paper provides cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text.
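For a sense of what such a card standardizes, here is a hedged sketch of a model card as a plain data structure. The sections mirror those proposed in the paper (model details, intended use, disaggregated evaluation), but this particular schema, the field names, and the toy values are illustrative assumptions, not the paper's released format.

```python
# Sketch of a model card as a data structure; the schema is illustrative.
from dataclasses import dataclass, field

@dataclass
class EvaluationResult:
    group: str     # e.g., a demographic or phenotypic group
    metric: str    # e.g., "false_positive_rate"
    value: float

@dataclass
class ModelCard:
    model_details: str
    intended_use: str
    out_of_scope_uses: str
    training_data: str
    evaluation_data: str
    ethical_considerations: str
    disaggregated_results: list[EvaluationResult] = field(default_factory=list)

    def worst_group(self, metric: str) -> EvaluationResult:
        """Surface the group scoring worst on a metric where higher is worse;
        this is the kind of gap the card format is meant to make visible."""
        rows = [r for r in self.disaggregated_results if r.metric == metric]
        return max(rows, key=lambda r: r.value)

# Toy card echoing the paper's smiling-face detector example.
card = ModelCard(
    model_details="CNN smile classifier, v0.1 (hypothetical)",
    intended_use="Aggregate photo-album statistics",
    out_of_scope_uses="Hiring, surveillance, emotion inference",
    training_data="Public face dataset (assumed)",
    evaluation_data="Held-out split, disaggregated by group",
    ethical_considerations="Performance may differ across groups",
    disaggregated_results=[
        EvaluationResult("Fitzpatrick I-III", "false_positive_rate", 0.04),
        EvaluationResult("Fitzpatrick IV-VI", "false_positive_rate", 0.09),
    ],
)
print(card.worst_group("false_positive_rate").group)
```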
Journal Article
LaMDA: Language Models for Dialog Applications
Romal Thoppilan, Daniel Adiwardana, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, Yaguang Li, Hongrae Lee, Huaixiu Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yaoqi Zhou, Chung-Ching Chang, I. A. Krivokon, Willard J. Rusch, Marc Pickett, Kathleen S. Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Hartz Søraker, Bendert Zevenbergen, Velu Prabhakaran, Mark Díaz, Ben Hutchinson, Kristen Olson, Alejandra Aguirre Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravindran Rajakumar, Alena Butryna, Matthew Lamm, V. O. Kuzmina, Joseph Fenton, Aaron Cohen, Rachel Bernstein, Raymond C. Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Rogers Croak, Ed H. Chi, Quoc Hoai Le +56 more
TL;DR: The authors present LaMDA, a family of Transformer-based neural language models specialized for dialog, with up to 137B parameters, pre-trained on 1.56T words of public dialog data and web text. They demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources lead to significant improvements on the two key challenges of safety and factual grounding.
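To make the "consult external knowledge sources" step concrete, here is a hedged sketch of a generate-then-ground loop of the kind the paper describes: a draft response is checked against retrieved evidence and revised if unsupported. The callables (generate, retrieve, is_supported) are stand-ins for the model and its toolset, not LaMDA's actual API.

```python
# Hedged sketch of a grounding loop: draft a reply, query an external
# knowledge tool, and revise the draft until its claims are supported.
from typing import Callable

def grounded_reply(
    prompt: str,
    generate: Callable[[str], str],            # base dialog model (assumed)
    retrieve: Callable[[str], str],            # external knowledge tool, e.g. search
    is_supported: Callable[[str, str], bool],  # checks draft against evidence
    max_revisions: int = 3,
) -> str:
    draft = generate(prompt)
    for _ in range(max_revisions):
        evidence = retrieve(draft)          # query the toolset with the draft
        if is_supported(draft, evidence):   # factually grounded: done
            return draft
        # Otherwise ask the model to rewrite its draft using the evidence.
        draft = generate(
            f"{prompt}\nDraft: {draft}\nEvidence: {evidence}\n"
            "Revise the draft so every claim is supported by the evidence."
        )
    return draft  # best effort once the revision budget is spent
```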