scispace - formally typeset

Yuan Lin

Researcher at University of Waterloo

Publications: 25
Citations: 304

Yuan Lin is an academic researcher from the University of Waterloo. The author has contributed to research in the topics of cruise control and cooperative adaptive cruise control, has an h-index of 7, and has co-authored 19 publications receiving 132 citations. Previous affiliations of Yuan Lin include Nanchang University and Virginia Tech.

Papers
Journal ArticleDOI

Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive Cruise Control

TL;DR: This study compares deep reinforcement learning (DRL) and model predictive control (MPC) for adaptive cruise control (ACC) design in car-following scenarios. The DRL-trained policy performs better than MPC when modeling errors are large, and performs comparably when modeling errors are small.
Journal ArticleDOI

Cooperative Adaptive Cruise Control With Adaptive Kalman Filter Subject to Temporary Communication Loss

TL;DR: A control algorithm with an adaptive Kalman filter that estimates the acceleration of the preceding vehicle is proposed; the estimated acceleration serves as a feedforward signal in the ego vehicle's CACC controller during temporary communication loss.
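The core idea of estimating a preceding vehicle's acceleration from measured speed can be sketched with a standard constant-acceleration Kalman filter. This is a minimal illustration only: the filter below uses fixed noise covariances rather than the adaptive scheme of the paper, and the state layout, noise parameters, and function name are assumptions for illustration.

```python
import numpy as np

def kalman_accel_estimator(v_meas, dt=0.1, q=0.5, r=0.2):
    """Estimate a preceding vehicle's acceleration from noisy speed
    measurements using a constant-acceleration Kalman filter.
    State x = [speed, acceleration]; only speed is measured."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
    H = np.array([[1.0, 0.0]])                  # measure speed only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])         # process-noise covariance
    R = np.array([[r]])                         # measurement-noise covariance
    x = np.array([[v_meas[0]], [0.0]])          # initial state estimate
    P = np.eye(2)                               # initial estimate covariance
    accel_est = []
    for z in v_meas:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new speed measurement
        y = np.array([[z]]) - H @ x             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        accel_est.append(float(x[1, 0]))
    return accel_est
```

In a CACC controller, the final acceleration estimate would replace the preceding vehicle's transmitted acceleration as the feedforward term while the V2V link is down.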
Posted Content

Anti-Jerk On-Ramp Merging Using Deep Reinforcement Learning

TL;DR: Deep reinforcement learning is used for decentralized decision-making and longitudinal control in high-speed on-ramp merging. The trade-off between collision avoidance (safety) and jerk minimization (passenger comfort) in the multi-objective reward function is investigated by obtaining the Pareto front.
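A multi-objective reward of this kind is typically scalarized with a weight that is swept to trace the Pareto front. The sketch below is a hypothetical reward shape, not the paper's actual formulation; the penalty values, thresholds, and parameter names (`gap_min`, `jerk_scale`, `w_jerk`) are assumptions.

```python
def merging_reward(gap, jerk, w_jerk, gap_min=2.0, jerk_scale=3.0):
    """Scalarized multi-objective reward for on-ramp merging.
    The safety term penalizes dangerously small gaps to the nearest
    vehicle; the comfort term penalizes jerk. Sweeping w_jerk over
    [0, 1] and retraining yields one point per weight, which together
    approximate the safety-comfort Pareto front."""
    safety = -10.0 if gap < gap_min else 0.0    # collision / near-collision penalty
    comfort = -abs(jerk) / jerk_scale           # jerk (comfort) penalty
    return (1.0 - w_jerk) * safety + w_jerk * comfort
```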
Proceedings ArticleDOI

Longitudinal Dynamic versus Kinematic Models for Car-Following Control Using Deep Reinforcement Learning

TL;DR: In this paper, the authors investigate the feasibility of applying DRL controllers trained using vehicle kinematic models to more realistic driving control with vehicle dynamics, in the context of adaptive cruise control (ACC).
Proceedings ArticleDOI

Longitudinal Dynamic versus Kinematic Models for Car-Following Control Using Deep Reinforcement Learning

TL;DR: This work redesigns the DRL framework to accommodate acceleration delay and acceleration-command dynamics by adding the delayed control inputs and the actual vehicle acceleration, respectively, to the reinforcement learning environment state. The redesigned DRL controller achieves near-optimal car-following performance with vehicle dynamics considered, when compared with dynamic programming solutions.
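The state-augmentation idea can be sketched as follows: the plant applies commands with a k-step delay, and the policy observes the pending commands plus the measured acceleration so the delay becomes visible to it. This is a minimal sketch under assumed interfaces; the class and function names are hypothetical, not the paper's code.

```python
from collections import deque

class DelayedActionBuffer:
    """Holds the last k commanded accelerations; the plant applies the
    oldest one each step, modeling a k-step actuation delay."""
    def __init__(self, k, init=0.0):
        self.buf = deque([init] * k, maxlen=k)

    def step(self, new_action):
        applied = self.buf[0]          # oldest command reaches the actuator now
        self.buf.append(new_action)    # maxlen drops the applied command
        return applied

def make_augmented_state(obs, action_history, accel):
    """Augment the base observation (e.g., gap and relative speed) with
    the pending delayed commands and the measured vehicle acceleration,
    so the policy can compensate for delay and command-response dynamics."""
    return list(obs) + list(action_history) + [accel]
```

With a 2-step delay, the command issued at step t only reaches the actuator at step t+2, which is exactly the information the augmented state exposes to the policy.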