Guanjie Zheng
Researcher at Pennsylvania State University
Publications - 37
Citations - 2890
Guanjie Zheng is an academic researcher at Pennsylvania State University. His research focuses on reinforcement learning and computer science. He has an h-index of 13 and has co-authored 31 publications receiving 1,147 citations. Previous affiliations include the Penn State College of Information Sciences and Technology and Shanghai Jiao Tong University.
Papers
Proceedings ArticleDOI
DRN: A Deep Reinforcement Learning Framework for News Recommendation
TL;DR: A Deep Q-Learning based recommendation framework is proposed that models future reward explicitly and treats the user return pattern as a supplement to the click/no-click label, capturing more user feedback information.
Proceedings ArticleDOI
IntelliLight: A Reinforcement Learning Approach for Intelligent Traffic Light Control
TL;DR: This paper proposes a more effective deep reinforcement learning model for traffic light control and tests the method on a large-scale real traffic dataset obtained from surveillance cameras.
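A common reward design in RL-based traffic-signal control penalizes congestion, for example via queue lengths on incoming lanes. The sketch below shows that idea in its simplest form; it is an illustrative single-term reward, not the paper's exact formulation.

```python
# Hypothetical reward for RL traffic-signal control: the negative sum of
# queue lengths on incoming lanes, so shorter queues yield higher reward.
def queue_reward(queue_lengths):
    return -sum(queue_lengths)

queue_reward([3, 0, 5, 2])  # -10
```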
Journal ArticleDOI
Revisiting Spatial-Temporal Similarity: A Deep Learning Framework for Traffic Prediction
TL;DR: A novel Spatial-Temporal Dynamic Network (STDN) is proposed, in which a flow gating mechanism learns the dynamic similarity between locations, and a periodically shifted attention mechanism handles long-term periodic temporal shifting.
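The intuition behind periodically shifted attention is that daily traffic patterns repeat but drift slightly, so instead of reading only the exact same time step from a previous day, the model attends over a small window around it. The sketch below illustrates that idea on scalar values; the window scoring and all names are illustrative, not STDN's architecture.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def shifted_attention(query, day_values, center, radius=1):
    """Attend over day_values[center-radius : center+radius+1], scoring each
    value by similarity (negative squared distance) to the query, and return
    the attention-weighted sum."""
    idxs = list(range(max(0, center - radius),
                      min(len(day_values), center + radius + 1)))
    scores = [-(query - day_values[i]) ** 2 for i in idxs]
    weights = softmax(scores)
    return sum(w * day_values[i] for w, i in zip(weights, idxs))
```

For a query of 5.0 against yesterday's values `[0, 4, 9, 5]` centered at index 2, the window `{4, 9, 5}` puts almost all weight on the matching value 5 even though it sits one step off-center, which is the shifting the mechanism is designed to absorb.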
Proceedings ArticleDOI
PressLight: Learning Max Pressure Control to Coordinate Traffic Signals in Arterial Network
TL;DR: The reward design is grounded in max pressure (MP) theory and can be proved to maximize the throughput of the traffic network, i.e., to minimize overall network travel time; the concise state representation fully supports optimization of the proposed reward function.
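In max pressure control, a phase's pressure is the sum over its permitted movements of (upstream queue − downstream queue), and the controller activates the phase with the largest pressure. The sketch below shows that rule in its textbook form, with hypothetical phase names and queue counts; it is not the PressLight agent itself, whose reward is derived from this quantity.

```python
def phase_pressure(movements):
    """movements: list of (upstream_queue, downstream_queue) pairs
    for the traffic movements a phase allows."""
    return sum(up - down for up, down in movements)

def max_pressure_phase(phases):
    """phases: dict mapping phase name -> list of movements.
    Returns the phase with the largest pressure."""
    return max(phases, key=lambda p: phase_pressure(phases[p]))

# Hypothetical intersection: two phases, queue counts in vehicles.
phases = {
    "NS": [(6, 1), (4, 2)],  # north-south movements, pressure 7
    "EW": [(3, 0), (2, 2)],  # east-west movements, pressure 3
}
max_pressure_phase(phases)  # "NS"
```

Because pressure compares upstream against downstream queues, greedily serving the max-pressure phase tends to push vehicles toward less congested links, which is what underlies the throughput-maximization guarantee the TL;DR refers to.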