Shitian Shen
Researcher at North Carolina State University
Publications - 9
Citations - 401
Shitian Shen is an academic researcher from North Carolina State University whose work centers on reinforcement learning and partially observable Markov decision processes. The author has an h-index of 6 and has co-authored 9 publications receiving 321 citations. Previous affiliations of Shitian Shen include Oak Ridge National Laboratory and the University of Southern California.
Papers
Journal ArticleDOI
Anomaly detection in dynamic networks: a survey
Stephen Ranshous, Shitian Shen, Danai Koutra, Steve Harenberg, Christos Faloutsos, Nagiza F. Samatova +9 more
TL;DR: Most prior work focuses on anomaly detection in static graphs, which do not change and can represent only a single snapshot of data; because real-world networks are constantly changing, this survey addresses the shift in focus to dynamic graphs, which evolve over time.
Proceedings ArticleDOI
Reinforcement Learning: the Sooner the Better, or the Later the Better?
Shitian Shen,Min Chi +1 more
TL;DR: This study investigated the impact of immediate versus delayed reward functions on RL-induced policies and empirically evaluated the effectiveness of the induced policies within an Intelligent Tutoring System called Deep Thought, showing a significant interaction effect between the induced policies and students' incoming competence.
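The immediate-vs-delayed distinction above is about when reward reaches the learner: per decision, or only at the episode's end. A minimal sketch of why this matters for credit assignment (reward values and discount factor are hypothetical, not from the paper):

```python
def discounted_return(rewards, gamma=0.9):
    """Discounted sum of a reward sequence, computed back-to-front."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Immediate: a learning gain is observed after each tutor decision.
immediate = [0.2, 0.1, 0.3]
# Delayed: the same total gain, but revealed only at the episode's end.
delayed = [0.0, 0.0, 0.6]

# With gamma < 1, the two reward schedules yield different returns,
# so they can induce different policies from the same interaction data.
print(discounted_return(immediate))
print(discounted_return(delayed))
```

With `gamma = 1` both sequences would be equivalent in total reward; discounting is what makes the timing of reward consequential.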
Proceedings ArticleDOI
Incorporating Student Response Time and Tutor Instructional Interventions into Student Modeling
Chen Lin,Shitian Shen,Min Chi +2 more
TL;DR: Results show that for next-step performance prediction, Intervention-BKT is more effective than BKT, whereas for predicting students' post-test scores, including student response time yields better results than using performance alone.
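Classic Bayesian Knowledge Tracing (BKT) assumes a single learn rate; the Intervention-BKT idea above conditions the student model on what the tutor did. A minimal sketch of a standard BKT step with an intervention-dependent learn rate (parameter values and intervention names are hypothetical illustrations):

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One BKT step: Bayesian posterior on mastery given the observed
    answer, followed by the learning transition."""
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return post + (1 - post) * learn

# Intervention-BKT-style variant: the learn rate depends on the tutor's
# instructional action (values chosen for illustration only).
learn_by_intervention = {"worked_example": 0.25, "problem_solving": 0.15}

p = 0.4  # prior probability the student has mastered the skill
for intervention, correct in [("worked_example", True), ("problem_solving", False)]:
    p = bkt_update(p, correct, learn=learn_by_intervention[intervention])
```

The loop shows the key difference from plain BKT: two students with identical answer sequences can end with different mastery estimates if they received different interventions.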
Proceedings Article
Aim Low: Correlation-Based Feature Selection for Model-Based Reinforcement Learning.
Shitian Shen,Min Chi +1 more
TL;DR: Surprisingly, for each correlation metric, the low-correlation option significantly outperformed its high-correlation peer, suggesting that low-correlation-based feature selection methods are more effective for model-based RL than high-correlation ones.
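The "aim low" option above prefers features that are weakly correlated with those already chosen. A minimal greedy sketch of that idea (the function name, data, and selection rule are illustrative assumptions, not the paper's exact algorithm):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_low_correlation(features, k):
    """Greedily pick k features, each time adding the candidate whose
    worst-case |correlation| with the already-selected set is lowest."""
    names = list(features)
    selected = [names[0]]  # seed with the first feature
    while len(selected) < k:
        best = min(
            (n for n in names if n not in selected),
            key=lambda n: max(abs(pearson(features[n], features[s])) for s in selected),
        )
        selected.append(best)
    return selected

# Toy usage: f2 is perfectly correlated with f1, f3 only weakly, so the
# low-correlation rule keeps f3 over f2.
feats = {"f1": [1, 2, 3, 4], "f2": [2, 4, 6, 8], "f3": [1, -1, 2, -2]}
print(select_low_correlation(feats, 2))
```

A high-correlation variant would simply replace `min` with `max`; the paper's surprising finding is that the low-correlation direction wins for model-based RL.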
Proceedings ArticleDOI
Improving Learning & Reducing Time: A Constrained Action-Based Reinforcement Learning Approach
TL;DR: This work constructs a general data-driven framework called Constrained Action-based Partially Observable Markov Decision Process (CAPOMDP) to induce effective pedagogical policies, and induces two types of policies: CAPOMDP_LG, using learning gain as reward with the goal of improving students' learning performance, and CAPOMDP_Time, using time as reward with the goal of reducing students' time on task.
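The two policy variants above differ only in how reward is defined over a tutoring episode. A minimal sketch of the two reward signals (the normalized-gain formula is a common convention in tutoring research; the function names and the use of raw negative seconds are illustrative assumptions, not the paper's definitions):

```python
def reward_learning_gain(pre_score, post_score):
    """Learning-gain-style reward (CAPOMDP_LG flavor): normalized gain,
    i.e. improvement as a fraction of the room left to improve."""
    return (post_score - pre_score) / (1.0 - pre_score)

def reward_time(seconds_on_task):
    """Time-style reward (CAPOMDP_Time flavor): less time on task is better."""
    return -seconds_on_task

# Toy usage: a student moving from 40% to 70% on the test, in 20 minutes.
print(reward_learning_gain(0.4, 0.7))
print(reward_time(20 * 60))
```

Training the same constrained POMDP under each reward then yields the two distinct policies the paper compares: one optimized for how much students learn, the other for how quickly.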