Ted Xiao
Researcher at Google
Publications - 23
Citations - 1149
Ted Xiao is an academic researcher at Google who has contributed to research in Computer Science and Reinforcement Learning. He has an h-index of 6 and has co-authored 9 publications receiving 278 citations.
Papers
Proceedings Article
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
Michael Ahn,Anthony Brohan,Noah Brown,Yevgen Chebotar,Omar Cortes,Byron David,Chelsea Finn,K. Gopalakrishnan,Karol Hausman,Alexander Herzog,Daniel Ho,Jasmine Hsu,Julian Ibarz,Brian Ichter,Alex Irpan,Eric Jang,Rosario Jauregui Ruano,Kyle Jeffrey,Sally Jesmonth,N. J. Joshi,Ryan Julian,Dmitry Kalashnikov,Yuheng Kuang,Kuang-Huei Lee,Sergey Levine,Yao Lu,Linda Luu,Carolina Parada,Peter Pastor,Jornell Quiambao,Kanishka Rao,Jarek Rettinghouse,D. Reyes,Pierre Sermanet,Nicolas Sievers,Clayton Tan,Alexander Toshev,Vincent Vanhoucke,Fei Xia,Ted Xiao,Peng Xu,Sichun Xu,Mengyuan Yan +42 more
TL;DR: It is shown how low-level skills can be combined with large language models: the language model provides high-level knowledge about the procedures for performing complex, temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment.
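The grounding mechanism this TL;DR describes can be illustrated with a minimal sketch: each candidate skill is scored by the product of the language model's estimate that the skill helps the instruction and a value function's estimate that the skill can succeed in the current state. The skill names, scores, and helper function below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of SayCan-style skill selection (illustrative values, not the paper's code).
# Each candidate skill gets two scores:
#   - lm_scores[s]:   how useful the language model deems skill s for the instruction
#   - affordances[s]: a value function's estimate that skill s can succeed right now
# The executed skill maximizes their product.

def select_skill(lm_scores, affordances):
    """Pick the skill maximizing p_LM(skill | instruction) * value(skill, state)."""
    return max(lm_scores, key=lambda s: lm_scores[s] * affordances[s])

# Hypothetical scores for the instruction "bring me a soda":
lm_scores = {"find a soda": 0.5, "pick up the soda": 0.3, "wipe the table": 0.05}
# Value functions ground these scores in the current scene: no soda is visible yet,
# so "pick up the soda" is infeasible despite its language relevance.
affordances = {"find a soda": 0.9, "pick up the soda": 0.1, "wipe the table": 0.8}

print(select_skill(lm_scores, affordances))  # → "find a soda"
```

The product form is what lets the value functions veto a linguistically plausible but currently infeasible skill.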
Proceedings Article
Inner Monologue: Embodied Reasoning through Planning with Language Models
Wenlong Huang,Fei Xia,Ted Xiao,Harris Chan,Jacky Liang,Pete Florence,Andy Zeng,Jonathan Tompson,Igor Mordatch,Yevgen Chebotar,Pierre Sermanet,Noah Brown,Tomas Jackson,Linda Luu,Sergey Levine,Karol Hausman,Brian Ichter +16 more
TL;DR: The authors investigate the extent to which large language models can reason over sources of feedback provided through natural language, such as success detection, scene description, and human interaction, without any additional training.
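The closed-loop idea in this TL;DR can be sketched as follows: after each skill execution, textual feedback (success detection, scene description, human input) is appended to the prompt so the language model can replan without any additional training. The helper names and the toy planner below are hypothetical stand-ins, not the paper's system.

```python
# Sketch of an Inner-Monologue-style feedback loop (hypothetical helpers,
# not the paper's code). The growing prompt is the "inner monologue": it
# accumulates the robot's actions and the environment's textual feedback.

def run_episode(plan_fn, execute_fn, feedback_fn, instruction, max_steps=5):
    prompt = f"Task: {instruction}\n"
    for _ in range(max_steps):
        step = plan_fn(prompt)            # LLM proposes the next skill from the prompt
        if step == "done":
            break
        prompt += f"Robot: {step}\n"
        execute_fn(step)                  # run the low-level skill
        prompt += f"Feedback: {feedback_fn(step)}\n"  # e.g. success detection text
    return prompt

def toy_plan(prompt):
    # Stand-in for the LLM: stop once the monologue records a success.
    return "done" if "Feedback: success" in prompt else "pick up the block"

log = run_episode(toy_plan, lambda skill: None, lambda skill: "success",
                  "stack the blocks")
print(log)
```

Because the feedback is plain text in the prompt, swapping in a real language model requires no retraining, which is the point the TL;DR makes.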
Posted Content
Learning Latent Plans from Play
Corey Lynch,Mohi Khansari,Ted Xiao,Vikash Kumar,Jonathan Tompson,Sergey Levine,Pierre Sermanet +6 more
TL;DR: Play-LMP is introduced, a self-supervised method that handles variability in the learning-from-play (LfP) setting by organizing play behaviors in a latent embedding space and reusing them at test time to achieve specific goals. Play-supervised models, unlike their expert-trained counterparts, are more robust to perturbations and exhibit retrying-till-success behavior, though the method requires a large amount of play data.
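The two-stage structure described here — encode a window of play into a latent plan, then decode goal-conditioned actions from it — can be sketched with stand-in linear maps. The names, dimensions, and weights below are illustrative assumptions; the paper uses learned neural networks.

```python
# Toy sketch of the Play-LMP idea (illustrative stand-ins, not the paper's model).
# 1) A "plan recognizer" encodes a play window (represented here by its endpoints)
#    into a latent plan vector z.
# 2) A goal-conditioned policy decodes an action from (state, goal, z).
import numpy as np

rng = np.random.default_rng(0)
STATE, LATENT, ACTION = 4, 2, 3

W_enc = rng.normal(size=(STATE * 2, LATENT))           # stand-in plan recognizer
W_dec = rng.normal(size=(STATE * 2 + LATENT, ACTION))  # stand-in policy

def encode_plan(play_start, play_end):
    """Map a play window to a latent plan z."""
    return np.concatenate([play_start, play_end]) @ W_enc

def policy(state, goal, z):
    """Decode a goal-conditioned action from state, goal, and latent plan."""
    return np.concatenate([state, goal, z]) @ W_dec

play_start, play_end = rng.normal(size=STATE), rng.normal(size=STATE)
z = encode_plan(play_start, play_end)      # organize play in the latent space
action = policy(play_start, play_end, z)   # reuse the plan to reach the goal
print(action.shape)  # (3,)
```

The latent plan z is what lets many distinct play behaviors that reach the same goal coexist, which is how the method absorbs the variability the TL;DR mentions.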
Journal Article
RT-1: Robotics Transformer for Real-World Control at Scale
Anthony Brohan,Noah Brown,Justice Carbajal,Yevgen Chebotar,Joseph Dabis,Chelsea Finn,K. Gopalakrishnan,Karol Hausman,Alexander Herzog,Jasmine Hsu,Julian Ibarz,Brian Ichter,Alex Irpan,Tomas Jackson,Sally Jesmonth,Nikhil J Joshi,Ryan Julian,Dmitry Kalashnikov,Yuheng Kuang,Isabel Leal,Kuang-Huei Lee,Sergey Levine,Yao Lu,Utsav Malla,D. Manjunath,Igor Mordatch,Ofir Nachum,Carolina Parada,Jodilyn Peralta,Emily Perez,Karl Pertsch,Jornell Quiambao,Kanishka Rao,Michael S. Ryoo,Grecia Salazar,Pannag Raghunath Sanketi,Kevin Sayed,Jaspiar Singh,Sumedh Anand Sontakke,Austin Stone,Clayton Tan,Huong Tran,Vincent Vanhoucke,Steve Vega,Quan Vuong,Fei Xia,Ted Xiao,Peng Xu,Sichun Xu,Tianhe Yu,Brianna Zitkovich +50 more
TL;DR: The authors present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties, and verify their conclusions in a study of how different model classes generalize as a function of data size, model size, and data diversity, based on large-scale data collected from real robots performing real-world tasks.