Xiaoyu Cao
Researcher at Duke University
Publications - 39
Citations - 1462
Xiaoyu Cao is an academic researcher from Duke University. The author has contributed to research on topics including computer science and robustness (computer science). The author has an h-index of 12 and has co-authored 29 publications receiving 598 citations. Previous affiliations of Xiaoyu Cao include Iowa State University.
Papers
Proceedings Article
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
TL;DR: This work performs the first systematic study of local model poisoning attacks on federated learning: assuming an attacker has compromised some client devices, the attacker manipulates the local model parameters on those devices during the learning process so that the global model has a large testing error rate.
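The attack summarized above can be illustrated with a toy sketch: compromised clients send large updates pointed opposite to the benign direction, dragging a naively averaged global model away from it. All numbers, dimensions, and the FedAvg-style mean aggregation here are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 clients each send a local model update (here, a
# 5-dimensional parameter vector); the server averages them (FedAvg-style).
n_clients, dim = 10, 5
benign_updates = rng.normal(loc=1.0, scale=0.1, size=(n_clients, dim))

# The attacker controls the last 3 clients and replaces their updates with
# a scaled-up vector pointing opposite to the benign direction.
n_compromised = 3
poisoned = benign_updates.copy()
poisoned[-n_compromised:] = -5.0 * benign_updates[:n_compromised].mean(axis=0)

clean_global = benign_updates.mean(axis=0)      # close to the benign direction
attacked_global = poisoned.mean(axis=0)         # dragged away from it

print(clean_global.round(2))
print(attacked_global.round(2))
```

Even with only 3 of 10 clients compromised, the unweighted mean is pulled to the wrong side of the origin, which is the kind of failure Byzantine-robust aggregation rules try to prevent.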
Proceedings ArticleDOI
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
Xiaoyu Cao, Neil Zhenqiang Gong, and 1 more
TL;DR: In this paper, the authors propose a region-based classification method that is robust to state-of-the-art evasion attacks, in which an attacker adds a small, carefully crafted noise to a testing example so that the classifier predicts an incorrect label.
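A minimal sketch of the region idea: instead of classifying the single test point, take a majority vote of the base classifier over points sampled from a hypercube around it. The toy classifier, the "pocket" shape, and all parameters below are illustrative assumptions, not the paper's model or exact ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

def point_classify(x):
    """Toy stand-in for a DNN with a thin misclassified 'pocket': it predicts
    class 0 only in a narrow strip around x[0] = 0 and class 1 elsewhere, so
    an attacker can evade it by nudging an input into the strip."""
    return 0 if abs(x[0]) < 0.03 else 1

def region_classify(x, radius=0.3, n_samples=1000):
    """Majority vote over points sampled uniformly from the hypercube of the
    given radius centered at x (a sketch of the region idea only)."""
    samples = x + rng.uniform(-radius, radius, size=(n_samples, len(x)))
    votes = [point_classify(s) for s in samples]
    return int(np.mean(votes) > 0.5)

x_adv = np.array([0.0, 0.0])   # input pushed into the misclassified pocket
print(point_classify(x_adv))   # 0 -> the point classifier is evaded
print(region_classify(x_adv))  # 1 -> the region vote recovers class 1
```

The intuition is that an adversarial example sits in a small misclassified pocket near the decision boundary, so most of the surrounding region still belongs to the correct class.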
Proceedings ArticleDOI
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
TL;DR: This work proposes FLTrust, a new federated learning method in which the service provider itself bootstraps trust, and in which normalizing the magnitudes of local model updates limits the impact of malicious updates with large magnitudes.
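A simplified sketch of FLTrust-style aggregation, under the assumption that the server computes its own reference update: each client update gets a trust score (a ReLU-clipped cosine similarity with the server update), is rescaled to the server update's magnitude, and the global update is the trust-weighted average. The numbers below are made up for illustration.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Trust-weighted aggregation sketch: ReLU(cosine similarity) scores,
    magnitude normalization to the server update's norm, weighted average."""
    s_norm = np.linalg.norm(server_update)
    scores, rescaled = [], []
    for u in client_updates:
        u_norm = np.linalg.norm(u)
        cos = server_update @ u / (s_norm * u_norm + 1e-12)
        scores.append(max(cos, 0.0))              # ReLU clips negative trust to 0
        rescaled.append(u * s_norm / (u_norm + 1e-12))
    scores = np.array(scores)
    return (scores[:, None] * np.array(rescaled)).sum(axis=0) / (scores.sum() + 1e-12)

# Two benign clients roughly agree with the server; one malicious client
# sends a huge update in the opposite direction.
server = np.array([1.0, 1.0])
clients = [np.array([0.9, 1.1]),
           np.array([1.2, 0.8]),
           np.array([-100.0, -100.0])]
agg = fltrust_aggregate(clients, server)
print(agg.round(2))  # stays close to the server direction
```

The malicious update has cosine similarity -1 with the server update, so its trust score is clipped to zero and its large magnitude never enters the average.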
Posted Content
On Certifying Robustness against Backdoor Attacks via Randomized Smoothing
TL;DR: It is found that existing randomized smoothing methods have limited effectiveness at defending against backdoor attacks, which highlights the need for new theory and methods to certify robustness against backdoor attacks.
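For context, the randomized smoothing primitive the paper evaluates predicts the class a base classifier outputs most often under Gaussian input noise; certificates then follow from how dominant that class is. The base classifier and parameters below are toy assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_predict(classify, x, sigma=0.5, n=2000):
    """Randomized smoothing prediction sketch: sample Gaussian noise around x,
    classify each noisy copy, and return the majority class."""
    noisy = x + rng.normal(0.0, sigma, size=(n, len(x)))
    votes = np.array([classify(z) for z in noisy])
    return int(np.bincount(votes).argmax())

# Toy base classifier: class 1 iff the first coordinate is positive.
base = lambda z: int(z[0] > 0)
print(smoothed_predict(base, np.array([1.0, 0.0])))  # 1
```

Smoothing of this kind certifies robustness to small test-time input perturbations, which is a different threat model from a backdoor planted at training time; that mismatch is one way to read the paper's negative finding.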