
Liting Sun

Researcher at University of California, Berkeley

Publications -  98
Citations -  1456

Liting Sun is an academic researcher from the University of California, Berkeley. The author has contributed to research topics including computer science and probabilistic logic, has an h-index of 15, and has co-authored 93 publications receiving 751 citations. Previous affiliations of Liting Sun include the University of California and the University of Science and Technology of China.

Papers
Posted Content

INTERACTION Dataset: An INTERnational, Adversarial and Cooperative moTION Dataset in Interactive Driving Scenarios with Semantic Maps.

TL;DR: The INTERACTION dataset, an INTERnational, Adversarial and Cooperative moTION dataset of interactive driving scenarios with semantic maps, is presented, capturing highly complex behavior such as negotiations, aggressive/irrational decisions, and traffic rule violations.
Proceedings ArticleDOI

Probabilistic Prediction of Interactive Driving Behavior via Hierarchical Inverse Reinforcement Learning

TL;DR: The quantitative results show that the proposed approach can accurately predict both the discrete driving decisions, such as yield or pass, and the continuous trajectories.
Proceedings ArticleDOI

A Fast Integrated Planning and Control Framework for Autonomous Driving via Imitation Learning

TL;DR: The results show that the proposed framework can achieve similar performance as sophisticated long-term optimization approaches but with significantly improved computational efficiency.
Posted Content

Probabilistic Prediction of Interactive Driving Behavior via Hierarchical Inverse Reinforcement Learning

TL;DR: In this paper, a probabilistic prediction approach based on hierarchical inverse reinforcement learning (IRL) is proposed to address the uncertainties in human behavior. However, the evaluation is limited to ramp-merging driving scenarios.
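The hierarchical structure described above can be illustrated with a minimal sketch: a top level assigns probabilities to discrete decisions (e.g., yield vs. pass) from decision-level rewards, and a bottom level assigns probabilities to candidate continuous trajectories conditioned on a decision. All feature values and reward weights below are hypothetical placeholders, not the paper's learned parameters, and the real method learns both levels jointly via IRL from demonstrations.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over reward values
    e = np.exp(x - x.max())
    return e / e.sum()

# hypothetical reward weights for each level (in the paper these are learned via IRL)
theta_decision = np.array([0.8, -0.3])    # weights on decision-level features
theta_trajectory = np.array([-1.0, -0.5]) # weights on trajectory-level features

def decision_probs(decision_features):
    # top level: P(decision) proportional to exp(decision-level reward)
    return softmax(decision_features @ theta_decision)

def trajectory_probs(traj_features):
    # bottom level: P(trajectory | decision) proportional to exp(trajectory reward)
    return softmax(traj_features @ theta_trajectory)

# hypothetical features for two discrete decisions: yield and pass
decision_features = np.array([[1.0, 0.2],   # yield
                              [0.4, 1.0]])  # pass
p_decision = decision_probs(decision_features)

# hypothetical features for three candidate trajectories under one decision
candidate_features = np.array([[0.1, 0.3],
                               [0.5, 0.2],
                               [0.2, 0.8]])
p_trajectory = trajectory_probs(candidate_features)

# joint probability: P(decision) * P(trajectory | decision)
p_joint = np.outer(p_decision, p_trajectory)
```

Because each level is a proper distribution, the joint probabilities over all (decision, trajectory) pairs sum to one, which is what makes the hierarchical factorization useful for downstream probabilistic prediction.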
Journal ArticleDOI

Efficient Sampling-Based Maximum Entropy Inverse Reinforcement Learning With Application to Autonomous Driving

TL;DR: In this article, an efficient sampling-based maximum-entropy inverse reinforcement learning (IRL) algorithm is proposed to directly learn the reward functions in the continuous domain while considering the uncertainties in demonstrated trajectories from human drivers.
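The core idea of sampling-based maximum-entropy IRL can be sketched as follows: the partition function over continuous trajectories is approximated by a finite set of sampled trajectories, and the reward weights are updated with the max-entropy gradient (empirical feature expectation of the demonstrations minus the expected features under the current reward). The feature definitions and the random-walk trajectory sampler below are illustrative assumptions, not the paper's actual feature set or dynamics-feasible sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(traj):
    # hypothetical trajectory features: mean speed and mean change in speed
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    accel = np.abs(np.diff(speed))
    return np.array([speed.mean(), accel.mean()])

def sample_trajectories(n, horizon=20):
    # stand-in sampler: 2D random walks (the paper uses feasible driving trajectories)
    steps = rng.normal(scale=0.5, size=(n, horizon, 2))
    return np.cumsum(steps, axis=1)

def maxent_irl_step(theta, demos, samples, lr=0.1):
    # P(traj) proportional to exp(theta . f(traj)), normalized over the sampled set
    f_demo = np.mean([features(t) for t in demos], axis=0)
    f_samples = np.array([features(t) for t in samples])
    weights = np.exp(f_samples @ theta)
    weights /= weights.sum()
    f_expected = weights @ f_samples
    # gradient ascent on the max-entropy log-likelihood
    return theta + lr * (f_demo - f_expected)

theta = np.zeros(2)
demos = sample_trajectories(5)      # stand-in for demonstrated human trajectories
samples = sample_trajectories(200)  # samples approximating the partition function
for _ in range(50):
    theta = maxent_irl_step(theta, demos, samples)
```

Approximating the normalizer with samples is what lets the method operate directly in the continuous trajectory domain, avoiding discretization of the state space.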