Richard Liaw
Researcher at University of California, Berkeley
Publications - 24
Citations - 2326
Richard Liaw is an academic researcher at the University of California, Berkeley. He has contributed to research on the topics of reinforcement learning and hyperparameters, has an h-index of 15, and has co-authored 22 publications that have received 1322 citations.
Papers
Proceedings ArticleDOI
Ray: a distributed framework for emerging AI applications
Philipp Moritz,Robert Nishihara,Stephanie Wang,Alexey Tumanov,Richard Liaw,Eric Liang,Melih Elibol,Zongheng Yang,William Paul,Michael I. Jordan,Ion Stoica +10 more
TL;DR: Ray is a distributed system that implements a unified interface able to express both task-parallel and actor-based computations, supported by a single dynamic execution engine; it employs a distributed scheduler and a distributed, fault-tolerant store to manage control state.
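The two computation patterns the paper unifies can be illustrated with a minimal standard-library sketch. This is not Ray's actual API; the pool, `square`, and `Counter` names are illustrative stand-ins for remote tasks and actors.

```python
from concurrent.futures import ThreadPoolExecutor

# Task-parallel: stateless functions submitted to a pool, each returning a future.
task_pool = ThreadPoolExecutor(max_workers=4)

def square(x):
    return x * x

task_results = [f.result() for f in [task_pool.submit(square, i) for i in range(4)]]
# task_results == [0, 1, 4, 9]

# Actor-based: a stateful object whose methods are invoked through the same
# future-returning interface; a single worker serializes its method calls,
# mimicking an actor's one-at-a-time execution semantics.
class Counter:
    def __init__(self):
        self.n = 0

    def incr(self):
        self.n += 1
        return self.n

actor_pool = ThreadPoolExecutor(max_workers=1)
counter = Counter()
actor_results = [f.result() for f in [actor_pool.submit(counter.incr) for _ in range(3)]]
# actor_results == [1, 2, 3]
```

The point of the unified interface is that both patterns flow through the same future-based call path, which is what lets a single dynamic execution engine schedule them together.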
Posted Content
Tune: A Research Platform for Distributed Model Selection and Training.
TL;DR: Tune is proposed, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms that meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation.
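The "narrow-waist" idea can be sketched with a toy driver, hedged as a stand-in rather than Tune's real API: the search algorithm only ever sees a config going in and a metric coming out, so any training script can be paired with any search strategy. The `trainable`, `random_search`, and `run` names here are hypothetical.

```python
import random

def trainable(config):
    # Stand-in for a training script: pretend the "loss" is lowest near lr = 0.1.
    return abs(config["lr"] - 0.1)

def random_search(space, num_samples, seed=0):
    # One interchangeable search algorithm: sample configs uniformly at random.
    rng = random.Random(seed)
    low, high = space["lr"]
    return [{"lr": rng.uniform(low, high)} for _ in range(num_samples)]

def run(trainable, search, space, num_samples=8):
    # The narrow waist: the driver only exchanges (config -> metric) pairs.
    trials = [(cfg, trainable(cfg)) for cfg in search(space, num_samples)]
    return min(trials, key=lambda t: t[1])  # best (config, metric)

best_config, best_loss = run(trainable, random_search, {"lr": (0.001, 1.0)})
```

Because the interface is this thin, swapping `random_search` for grid search or a bandit-based scheduler requires no change to the trainable, which is the scaling and implementation-simplicity argument the TL;DR makes.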
Posted Content
RLlib: Abstractions for Distributed Reinforcement Learning
Eric Liang,Richard Liaw,Philipp Moritz,Robert Nishihara,Roy Fox,Ken Goldberg,Joseph E. Gonzalez,Michael I. Jordan,Ion Stoica +8 more
TL;DR: This work argues for distributing RL components in a composable way by adapting algorithms for top-down hierarchical control, thereby encapsulating parallelism and resource requirements within short-running compute tasks. It realizes this through RLlib, a library that provides scalable software primitives for RL.
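Top-down hierarchical control can be sketched with the standard library (this is not RLlib's API; `rollout` and `train` are hypothetical names): a driver owns the training loop and farms out short-running rollout tasks each iteration, so parallelism never leaks outside a single step.

```python
from concurrent.futures import ThreadPoolExecutor

def rollout(policy_version, worker_id):
    # Short-running compute task: collect a small, fixed batch of experience.
    return [(worker_id, policy_version, step) for step in range(2)]

def train(num_iters=3, num_workers=4):
    policy_version = 0
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for _ in range(num_iters):
            # The driver fans out rollouts, waits for all of them, then updates.
            futures = [pool.submit(rollout, policy_version, w) for w in range(num_workers)]
            batch = [sample for f in futures for sample in f.result()]
            policy_version += 1  # stand-in for a policy update from the batch
    return policy_version, len(batch)

version, samples_per_iter = train()
```

Because each rollout is a short task that returns to the driver, resource requirements stay local to the task and the overall algorithm remains a simple sequential loop at the top.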
Proceedings Article
RLlib: Abstractions for Distributed Reinforcement Learning
Posted Content
Ray RLLib: A Composable and Scalable Reinforcement Learning Library
Eric Liang,Richard Liaw,Robert Nishihara,Philipp Moritz,Roy Fox,Joseph E. Gonzalez,Ken Goldberg,Ion Stoica +7 more
TL;DR: Ray RLLib is a composable reinforcement learning framework that encapsulates parallelism and resource requirements within individual components, achieved by building on top of a flexible task-based programming model.