
Richard Liaw

Researcher at University of California, Berkeley

Publications -  24
Citations -  2326

Richard Liaw is an academic researcher from University of California, Berkeley. The author has contributed to research on the topics of reinforcement learning and hyperparameters. The author has an h-index of 15 and has co-authored 22 publications that have received 1,322 citations.

Papers
Proceedings ArticleDOI

Ray: a distributed framework for emerging AI applications

TL;DR: Ray is a distributed system that implements a unified interface capable of expressing both task-parallel and actor-based computations. The interface is supported by a single dynamic execution engine, and the system employs a distributed scheduler and a distributed, fault-tolerant store to manage the control state.
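
The unified task/actor interface summarized above can be illustrated with a minimal sketch in Python; the names square and Counter are illustrative rather than taken from the paper, and exact API details may vary across Ray versions.

    import ray

    ray.init()  # start a local Ray runtime

    # Task-parallel computation: a stateless remote function.
    @ray.remote
    def square(x):
        return x * x

    # Actor-based computation: a stateful remote class.
    @ray.remote
    class Counter:
        def __init__(self):
            self.total = 0

        def add(self, value):
            self.total += value
            return self.total

    # Both styles share the same .remote() / ray.get() calling convention.
    futures = [square.remote(i) for i in range(4)]
    print(ray.get(futures))          # [0, 1, 4, 9]

    counter = Counter.remote()
    print(ray.get(counter.add(5)))   # 5
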
Posted Content

Tune: A Research Platform for Distributed Model Selection and Training.

TL;DR: Tune is proposed as a unified framework for model selection and training. It provides a narrow-waist interface between training scripts and search algorithms that meets the requirements of a broad range of hyperparameter search algorithms, allows search to scale straightforwardly to large clusters, and simplifies algorithm implementation.
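
A rough sketch of that narrow-waist interface, assuming an older Ray Tune function API (tune.report and tune.run; newer releases expose tune.Tuner instead), with a toy loss in place of a real training script:

    from ray import tune

    # The training script reports metrics through Tune's narrow-waist interface;
    # the search algorithm only sees configs going in and metrics coming out.
    def train_model(config):
        for step in range(10):
            loss = (config["lr"] - 0.1) ** 2 + 1.0 / (step + 1)  # toy objective
            tune.report(mean_loss=loss)

    analysis = tune.run(
        train_model,
        config={"lr": tune.grid_search([0.01, 0.1, 1.0])},
    )
    print(analysis.get_best_config(metric="mean_loss", mode="min"))
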
Posted Content

RLlib: Abstractions for Distributed Reinforcement Learning

TL;DR: This work argues for distributing RL components in a composable way by adapting algorithms to top-down hierarchical control, which encapsulates parallelism and resource requirements within short-running compute tasks. These ideas are realized in RLlib, a library that provides scalable software primitives for RL.
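
The top-down hierarchical control pattern can be sketched with plain Ray actors; this is not RLlib's actual API, only an illustration in which a driver owns rollout workers, issues short-lived sampling tasks, and applies updates centrally (RolloutWorker and the toy update rule are assumptions for the example).

    import random
    import ray

    ray.init()

    # Each rollout worker encapsulates its own state and resource needs.
    @ray.remote
    class RolloutWorker:
        def __init__(self, worker_id):
            self.worker_id = worker_id

        def sample(self, policy_weights):
            # Placeholder for environment interaction under the given policy.
            return [random.random() for _ in range(4)]

    # Top-down hierarchical control: the driver coordinates all workers.
    workers = [RolloutWorker.remote(i) for i in range(2)]
    weights = {"w": 0.0}

    for iteration in range(3):
        batches = ray.get([w.sample.remote(weights) for w in workers])
        weights["w"] += sum(sum(b) for b in batches) * 0.01  # toy update
        print(f"iteration {iteration}: w = {weights['w']:.3f}")
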
Posted Content

Ray RLLib: A Composable and Scalable Reinforcement Learning Library

TL;DR: Ray RLLib is a composable reinforcement learning library that encapsulates parallelism and resource requirements within individual components, which it achieves by building on top of a flexible task-based programming model.