Sehoon Ha
Researcher at Georgia Institute of Technology
Publications - 72
Citations - 3276
Sehoon Ha is an academic researcher at Georgia Institute of Technology. His research focuses on computer science and reinforcement learning. He has an h-index of 17 and has co-authored 54 publications receiving 2,115 citations. His previous affiliations include Disney Research and Facebook.
Papers
Posted Content
Soft Actor-Critic Algorithms and Applications
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, Sergey Levine
TL;DR: Soft Actor-Critic (SAC), the recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework, achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample efficiency and asymptotic performance.
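The entropy bonus at the heart of SAC can be illustrated with its soft Bellman target: the next-state value is the minimum of two Q estimates minus a temperature-weighted log-probability of the policy's action. The sketch below is a minimal, self-contained illustration of that backup (function name, scalar inputs, and default `alpha`/`gamma` values are illustrative, not the paper's implementation):

```python
def soft_target(reward, next_q1, next_q2, next_logp,
                alpha=0.2, gamma=0.99, done=False):
    """Soft Bellman backup in the style of SAC: the target augments the
    return with an entropy bonus -alpha * log pi(a'|s'), and takes the
    minimum of two Q estimates to curb value overestimation."""
    next_v = min(next_q1, next_q2) - alpha * next_logp
    return reward + gamma * (0.0 if done else next_v)

# Example: r=1.0, clipped Q=2.0, log-prob=-1.0 (entropy bonus of +0.2)
y = soft_target(1.0, 2.0, 3.0, -1.0)  # 1.0 + 0.99 * (2.0 + 0.2)
```

Raising `alpha` rewards more stochastic policies, which is what drives SAC's exploration.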
Journal ArticleDOI
Iterative Training of Dynamic Skills Inspired by Human Coaching Techniques
Sehoon Ha, C. Karen Liu
TL;DR: This work introduces "control rigs" as an intermediate layer of control module to facilitate the mapping between high-level instructions and low-level control variables, and develops a new sampling-based optimization method, Covariance Matrix Adaptation with Classification (CMA-C), to efficiently compute control rig parameters.
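The general idea behind combining CMA-style sampling with a classifier can be sketched as follows: candidates the classifier predicts to be infeasible are discarded before the expensive objective evaluation, and the mean and covariance then adapt toward the surviving elites. This is a toy rank-mu-style sketch of that idea under assumed interfaces (`objective`, `feasible`), not the paper's exact CMA-C algorithm:

```python
import numpy as np

def cma_c_step(mean, cov, objective, feasible, pop=20, elite_frac=0.25, rng=None):
    """One CMA-style iteration with a feasibility classifier (toy sketch):
    sample candidates, let the classifier prune ones predicted infeasible,
    evaluate survivors, and adapt mean/covariance toward the elites."""
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = rng.multivariate_normal(mean, cov, size=pop)
    kept = [x for x in samples if feasible(x)]   # classifier skips evaluation
    if not kept:                                 # fall back if all were rejected
        kept = list(samples)
    kept.sort(key=objective)                     # costly evaluation on survivors
    elites = np.array(kept[: max(1, int(len(kept) * elite_frac))])
    new_mean = elites.mean(axis=0)
    centered = elites - mean                     # rank-mu-style covariance update
    new_cov = centered.T @ centered / len(elites) + 1e-6 * np.eye(len(mean))
    return new_mean, new_cov
```

In the paper's setting, `objective` would score a simulated trial of the control rig parameters, and the classifier saves those trials for candidates predicted to fail outright.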
Posted Content
Learning to Walk via Deep Reinforcement Learning
TL;DR: In this article, a sample-efficient deep RL algorithm based on maximum entropy RL was proposed to learn walking gaits on a real-world Minitaur robot in about two hours.
Journal ArticleDOI
DART: Dynamic Animation and Robotics Toolkit
Jeongseok Lee, Michael X. Grey, Sehoon Ha, Tobias Kunz, Sumit Jain, Yuting Ye, Siddhartha S. Srinivasa, Mike Stilman, C. Karen Liu
TL;DR: DART (Dynamic Animation and Robotics Toolkit) is a collaborative, cross-platform, open source library that features a multibody dynamic simulator and various kinematic tools for control and motion planning.
Proceedings Article
Learning to Walk via Deep Reinforcement Learning
TL;DR: A sample-efficient deep RL algorithm based on maximum entropy RL that requires minimal per-task tuning and only a modest number of trials to learn neural network policies is proposed and achieves state-of-the-art performance on simulated benchmarks with a single set of hyperparameters.