Guanya Shi
Researcher at California Institute of Technology
Publications - 43
Citations - 793
Guanya Shi is an academic researcher at the California Institute of Technology who has contributed to research in computer science and control theory. The author has an h-index of 10 and has co-authored 33 publications receiving 451 citations. Previous affiliations of Guanya Shi include Tsinghua University and Nvidia.
Papers
Proceedings ArticleDOI
Neural Lander: Stable Drone Landing Control Using Learned Dynamics
Guanya Shi,Xichen Shi,Michael O'Connell,Rose Yu,Kamyar Azizzadenesheli,Animashree Anandkumar,Yisong Yue,Soon-Jo Chung +7 more
TL;DR: A novel deep-learning-based robust nonlinear controller (Neural-Lander) that improves the control performance of a quadrotor during landing; it is the first DNN-based nonlinear feedback controller with stability guarantees that can use arbitrarily large neural networks.
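The idea of augmenting a nominal dynamics model with a learned residual can be sketched in a toy 1-D altitude example. Everything below is illustrative, not the paper's actual method or parameters: the "DNN" is a stand-in function for a ground-effect force, and the gains, mass, and feature choices are assumptions.

```python
import numpy as np

# Hypothetical 1-D landing model. Nominal dynamics: m * z_ddot = u - m * g.
m, g, dt = 1.0, 9.81, 0.01

def learned_residual(z, z_dot):
    # Stand-in for a learned model of the near-ground disturbance force;
    # a real system would use a trained network here.
    return 2.0 * np.exp(-5.0 * max(z, 0.0))

def control(z, z_dot, z_ref):
    # PD feedback designed on the nominal model, then subtract the
    # predicted residual so it cancels in the closed loop.
    kp, kd = 20.0, 8.0
    u_nominal = m * (g + kp * (z_ref - z) - kd * z_dot)
    return u_nominal - learned_residual(z, z_dot)

def step(z, z_dot, u):
    # "True" plant: nominal dynamics plus the residual force (known here
    # by construction, so the cancellation is exact in this toy example).
    z_ddot = (u + learned_residual(z, z_dot)) / m - g
    return z + z_dot * dt, z_dot + z_ddot * dt

z, z_dot = 1.0, 0.0           # start 1 m above the landing reference
for _ in range(2000):          # simulate 20 s with Euler integration
    z, z_dot = step(z, z_dot, control(z, z_dot, 0.0))
# z settles near the reference altitude 0.0
```

Because the compensation term removes the residual from the closed loop, the remaining error dynamics are just the well-damped nominal PD loop; the paper's contribution is making this kind of cancellation provably stable with a spectrally normalized DNN in place of the stand-in function.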
Journal ArticleDOI
Microfluidics cell sample preparation for analysis: Advances in efficient cell enrichment and precise single cell capture.
TL;DR: This review summarizes the category of technologies that provide new solutions and creative insights into the two tasks of cell manipulation, focusing on developments of the past five years and highlighting representative works.
Proceedings ArticleDOI
Neural-Swarm: Decentralized Close-Proximity Multirotor Control Using Learned Interactions
TL;DR: Experimental results demonstrate that the proposed controller significantly outperforms a baseline nonlinear tracking controller, with up to four-times smaller worst-case height-tracking errors, and empirically show that the learned model generalizes to larger swarm sizes.
Journal ArticleDOI
Car-following method based on inverse reinforcement learning for autonomous vehicle decision-making
TL;DR: The reward function R for each driver's data is established using an inverse reinforcement learning algorithm and visualized; driving characteristics and car-following strategies are then analyzed, and the efficiency of the proposed method is shown by simulation in a highway environment.
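A common setup in IRL, and one way to read this TL;DR, is a reward that is linear in hand-picked state features, R(s) = w · φ(s), which can then be evaluated over a grid of states for visualization. The sketch below is a minimal illustration under that assumption; the feature definitions and the weight vector are invented for the example, not taken from the paper (in practice w would be recovered from the driver demonstration data).

```python
import numpy as np

# Hypothetical car-following features: penalty for deviating from a
# desired 20 m headway gap, and penalty on relative speed (assumptions).
def phi(gap, rel_speed):
    return np.array([-(gap - 20.0) ** 2, -rel_speed ** 2])

w = np.array([0.1, 1.0])  # assumed learned weights, for illustration only

def reward(gap, rel_speed):
    # Linear reward R(s) = w . phi(s)
    return float(w @ phi(gap, rel_speed))

# "R visualization": evaluate the reward over a grid of headway gaps.
gaps = np.linspace(5.0, 35.0, 7)
R = [reward(g, 0.0) for g in gaps]
# The reward peaks at the 20 m gap and falls off on either side,
# which is the kind of structure a reward plot would reveal.
```

Plotting `R` against `gaps` (e.g. with matplotlib) would give the reward-visualization step the summary describes; a car-following policy could then be derived by maximizing this reward.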