
Xiao Liu

Researcher at National University of Defense Technology

Publications - 7
Citations - 31

Xiao Liu is an academic researcher from National University of Defense Technology. The author has contributed to research in the topics of Computer science and Engineering. The author has an h-index of 2 and has co-authored 4 publications receiving 10 citations.

Papers
Journal ArticleDOI

Numerical investigation into impact responses of an offshore wind turbine jacket foundation subjected to ship collision

TL;DR: This paper investigates the impact responses of a ship and offshore wind turbine jacket foundation system under collision. The most dangerous impact cases and the most vulnerable components are identified, and the effects of impact location, collision angle and impact velocity on the crashing responses, impact energy distribution and interactive deformation mechanisms are studied.
Book ChapterDOI

Poisoning Machine Learning Based Wireless IDSs via Stealing Learning Model

TL;DR: This paper proposes an Adaptive SMOTE (A-SMOTE) algorithm that adaptively generates new training data points from a few existing labeled samples, and introduces a stealing-model attack together with a novel poisoning strategy that attacks the substitute machine learning model.
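
The paper's adaptive selection rule is not reproduced here, but the SMOTE-style generation step that A-SMOTE builds on can be sketched in a few lines of Python; the interpolation scheme below is an assumed, simplified illustration rather than the A-SMOTE algorithm itself.

import numpy as np

def smote_like_oversample(X_minority, n_new, k=5, seed=None):
    """Generate n_new synthetic points by interpolating each randomly chosen
    seed sample toward one of its k nearest neighbours (plain SMOTE idea)."""
    rng = np.random.default_rng(seed)
    n = len(X_minority)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        # distances from the chosen seed to every other minority sample
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the seed itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.vstack(synthetic)

# Example: augment 20 labeled samples with 20 synthetic ones.
X_min = np.random.default_rng(0).normal(size=(20, 4))
X_aug = smote_like_oversample(X_min, n_new=20, k=5, seed=1)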
Journal ArticleDOI

Model Capacity Vulnerability in Hyper-Parameters Estimation

TL;DR: This paper studies an adversarial vulnerability of model capacity caused by poisoning the estimation of model hyper-parameters, and demonstrates the vulnerability on a polynomial regression model, for which evading model-oriented detection is challenging.
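
As a hedged illustration of such a capacity vulnerability, assume the hyper-parameter in question is the polynomial degree selected by cross-validation; the sketch below is not the paper's exact poisoning strategy, it only shows how a few injected points can change which degree, and hence which model capacity, gets selected.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def select_degree(x, y, max_degree=8, cv=5):
    """Return the polynomial degree with the lowest cross-validated MSE."""
    errors = []
    for d in range(1, max_degree + 1):
        model = make_pipeline(PolynomialFeatures(d), LinearRegression())
        mse = -cross_val_score(model, x.reshape(-1, 1), y,
                               scoring="neg_mean_squared_error", cv=cv).mean()
        errors.append(mse)
    return int(np.argmin(errors)) + 1

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = 1.0 + 2.0 * x + rng.normal(scale=0.05, size=60)   # underlying data is linear

print("clean degree:", select_degree(x, y))

# Inject a handful of crafted points; the degree (i.e. the model capacity)
# estimated from the poisoned data can shift away from the clean choice.
x_p = np.concatenate([x, [0.85, 0.90, 0.95]])
y_p = np.concatenate([y, [6.0, -6.0, 6.0]])
print("poisoned degree:", select_degree(x_p, y_p))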
Journal ArticleDOI

Imperceptible Adversarial Attack via Invertible Neural Networks

TL;DR: Huang et al. introduce a novel Adversarial Attack via Invertible Neural Networks (AdvINN) method to produce robust and imperceptible adversarial examples. AdvINN exploits the information-preservation property of invertible neural networks and generates adversarial examples by simultaneously adding class-specific semantic information of the target class and dropping discriminant information of the original class.
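
The full AdvINN attack (trading class-specific information between a source and a target image) is not reproduced here; the sketch below only illustrates the information-preservation property it relies on, using a minimal additive coupling layer, a standard building block of invertible neural networks, written in PyTorch.

import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Splits the input into two halves and transforms one half with a function
    of the other, so the layer is exactly invertible and loses no information."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim // 2))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        y2 = x2 + self.net(x1)            # transform one half only
        return torch.cat([x1, y2], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        x2 = y2 - self.net(y1)            # exact inverse of the forward pass
        return torch.cat([y1, x2], dim=-1)

layer = AdditiveCoupling(dim=8)
x = torch.randn(2, 8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-6)  # information preserved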