Xi Hongsheng
Researcher at University of Science and Technology of China
Publications - 47
Citations - 164
Xi Hongsheng is an academic researcher from the University of Science and Technology of China. The author has contributed to research on topics including Markov processes and Markov chains, has an h-index of 6, and has co-authored 47 publications receiving 155 citations.
Papers
Proceedings ArticleDOI
A Markov Game Theory-Based Risk Assessment Model for Network Information System
TL;DR: An automatically generated reinforcement scheme is proposed, offering great convenience to the system administrator; all possible future risks are factored into the present risk assessment.
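The abstract above suggests a game between attacker and defender played over system states. As a minimal illustrative sketch (all state names, probabilities, and damage values below are invented, and a pure-strategy minimax stands in for the paper's actual solution concept), a Markov-game-style risk score can be formed by solving a small matrix game in each state and weighting by state probability:

```python
def pure_minimax(payoff):
    """Defender's security level under pure strategies: the defender
    picks the column minimizing the worst-case (max) attacker payoff."""
    return min(max(row[d] for row in payoff) for d in range(len(payoff[0])))

# (occupancy probability, attacker-vs-defender damage matrix) per state
states = {
    "normal":      (0.7, [[1, 2], [3, 1]]),
    "compromised": (0.3, [[8, 5], [6, 9]]),
}

# overall risk: each state's game value weighted by its probability
overall_risk = sum(p * pure_minimax(m) for p, m in states.values())
```

A full Markov game would couple the states through transition probabilities that depend on both players' actions; here that coupling is abstracted into fixed occupancy probabilities.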
Proceedings ArticleDOI
A Novel Approach to Network Security Situation Awareness Based on Multi-Perspective Analysis
TL;DR: This paper proposes a novel NSSA (network security situation awareness) model that uses descriptions of security attacks, vulnerabilities, and security services to evaluate the current network security situation, adopting a multi-perspective analysis.
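The multi-perspective idea can be caricatured as fusing independent per-perspective scores into one situation value. The perspective names, scores, and weights below are invented for illustration and do not come from the paper:

```python
def fuse(scores, weights):
    """Weighted fusion of per-perspective scores into one situation value."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[k] * scores[k] for k in scores)

# hypothetical perspectives: attacks, vulnerabilities, security services
scores  = {"attack": 0.8, "vulnerability": 0.5, "service": 0.2}
weights = {"attack": 0.5, "vulnerability": 0.3, "service": 0.2}
situation = fuse(scores, weights)
```

A real NSSA model would derive each perspective's score from its own data source (alerts, scan results, service configuration) rather than from fixed constants.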
Proceedings ArticleDOI
Application of CLIPS Expert System to Malware Detection System
TL;DR: A malware detection system based on expert systems is presented that integrates signature-based analysis and anomaly-detection techniques; it can detect not only known malware but also some zero-day attacks that use known techniques, as well as malware adopting low-level evasion techniques such as polymorphism and packing.
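The paper's rules are written in CLIPS; as a language-neutral sketch of the same hybrid idea (the signatures and the entropy threshold below are invented), signature matching can be backed by a simple anomaly rule that flags high-entropy, likely packed samples:

```python
import math

# invented byte-pattern signatures standing in for a real signature base
SIGNATURES = {b"EVIL_PAYLOAD", b"DROP TABLE"}

def byte_entropy(data):
    """Shannon entropy in bits per byte; packed/encrypted code scores high."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def classify(sample: bytes) -> str:
    if any(sig in sample for sig in SIGNATURES):
        return "known-malware"        # signature rule fired
    if byte_entropy(sample) > 7.0:    # anomaly rule: looks packed
        return "suspicious-packed"
    return "clean"
```

The ordering mirrors the paper's integration: the cheap signature check handles known malware, while the anomaly rule catches samples that evade signatures via packing.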
Journal ArticleDOI
Performance optimization of continuous-time Markov control processes based on performance potentials
Tang Hao, Xi Hongsheng, Yin Baoqun +2 more
TL;DR: Average-cost optimization problems are studied for a class of continuous-time Markov control processes with a compact action set; the average-cost optimality equation is derived and the existence of its solution is established.
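Performance potentials g solve the Poisson equation Q g = η·1 − f for a generator Q, cost-rate vector f, and long-run average cost η. A minimal sketch for an invented 2-state chain, using uniformization (P = I + Q/α) to iterate toward g:

```python
# generator of an invented 2-state continuous-time Markov chain
Q = [[-2.0, 2.0],
     [1.0, -1.0]]
f = [5.0, 1.0]          # per-state cost rates

# stationary distribution of a 2-state chain (closed form)
a, b = Q[0][1], Q[1][0]
pi = [b / (a + b), a / (a + b)]
eta = pi[0] * f[0] + pi[1] * f[1]   # long-run average cost

# uniformize: P = I + Q/alpha with alpha >= the maximum exit rate
alpha = 2.0
P = [[(1.0 if i == j else 0.0) + Q[i][j] / alpha for j in range(2)]
     for i in range(2)]

# iterate g <- (f - eta)/alpha + P g; the fixed point satisfies
# (I - P) g = (f - eta)/alpha, i.e. Q g = eta*1 - f
g = [0.0, 0.0]
for _ in range(10000):
    g = [(f[i] - eta) / alpha + sum(P[i][j] * g[j] for j in range(2))
         for i in range(2)]
```

Only potential differences such as g[0] − g[1] are determined by the Poisson equation; the iteration above converges to the solution normalized so that π·g equals its initial value of zero.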
Journal ArticleDOI
Error bounds of optimization algorithms for semi-Markov decision processes
Tang Hao, Yin Baoqun, Xi Hongsheng +2 more
TL;DR: This work introduces an α-uniformized Markov chain (UMC) for a semi-Markov decision process (SMDP) via A_α and a uniformization parameter, and derives error bounds for a potential-based policy-iteration algorithm and a value-iteration algorithm, respectively, when various calculation errors are present.
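Plain uniformization itself is easy to illustrate: embedding a CTMC generator Q into the discrete-time chain P_α = I + Q/α preserves the stationary distribution for any α at least the maximum exit rate. A sketch with an invented 3-state generator (this shows ordinary uniformization of a Markov chain, not the paper's α-UMC construction for SMDPs):

```python
def uniformize(Q, alpha):
    """Build the row-stochastic matrix P_alpha = I + Q/alpha."""
    n = len(Q)
    return [[(1.0 if i == j else 0.0) + Q[i][j] / alpha for j in range(n)]
            for i in range(n)]

def stationary(P, iters=2000):
    """Power iteration on a row-stochastic matrix (aperiodic case)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# invented 3-state generator: rows sum to zero, max exit rate 4
Q = [[-3.0, 2.0, 1.0],
     [1.0, -2.0, 1.0],
     [2.0, 2.0, -4.0]]

p4 = stationary(uniformize(Q, 4.0))   # alpha = max exit rate
p8 = stationary(uniformize(Q, 8.0))   # any larger alpha works too
```

Both choices of α yield the same stationary distribution; a larger α only slows the embedded chain's mixing, which is one source of the calculation errors the paper bounds.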