SciSpace - formally typeset

Xuanqing Liu

Researcher at University of California, Los Angeles

Publications -  35
Citations -  2359

Xuanqing Liu is an academic researcher from the University of California, Los Angeles. The author has contributed to research topics including Robustness (computer science) and Artificial neural network. The author has an h-index of 12 and has co-authored 33 publications receiving 1047 citations. Previous affiliations of Xuanqing Liu include the University of California, Davis.

Papers
Proceedings ArticleDOI

Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks

TL;DR: Cluster-GCN is proposed, a novel GCN algorithm suitable for SGD-based training that exploits the graph clustering structure. It allows training much deeper GCNs without significant time or memory overhead, which leads to improved prediction accuracy.
Proceedings ArticleDOI

Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks

TL;DR: Cluster-GCN as discussed by the authors is a novel GCN algorithm suitable for SGD-based training that exploits the graph clustering structure: at each step, it samples a block of nodes associated with a dense subgraph and restricts the neighborhood search to that subgraph.
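The core idea above — restricting GCN propagation to within-cluster subgraphs so each SGD step touches only a small block of the adjacency matrix — can be sketched in a few lines of numpy. This is an illustrative toy only, assuming a precomputed cluster assignment and mean-aggregation GCN layers; the function names and the 4-node graph are mine, not from the paper:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: mean-aggregate neighbor features, then transform."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # guard isolated nodes
    return (adj @ feats) / deg @ weight

def cluster_batches(adj, feats, clusters):
    """Yield per-cluster subgraphs; neighborhood search stays within each cluster."""
    for nodes in clusters:
        idx = np.array(nodes)
        yield adj[np.ix_(idx, idx)], feats[idx]

# Toy 4-node graph (two disconnected edges), split into two clusters.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.eye(4)
W = np.ones((4, 2))
outs = [gcn_layer(a, x, W) for a, x in cluster_batches(adj, feats, [[0, 1], [2, 3]])]
```

Each mini-batch forward pass now only multiplies a cluster-sized adjacency block, which is what keeps memory bounded as the network gets deeper.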
Book ChapterDOI

Towards Robust Neural Networks via Random Self-ensemble

TL;DR: Random Self-Ensemble (RSE) as mentioned in this paper adds random noise layers to the neural network to defend against strong gradient-based attacks, and ensembles predictions over random noise draws to stabilize performance.
Posted Content

Towards Robust Neural Networks via Random Self-ensemble

TL;DR: This paper proposes a new defense algorithm called Random Self-Ensemble (RSE), which adds random noise layers to the neural network to defend against strong gradient-based attacks, and ensembles predictions over random noise draws to stabilize performance.
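The two ingredients described above — a noise layer injected into the forward pass, and averaging predictions over many noise draws — can be sketched minimally in numpy. This is a simplified illustration with a single linear layer, assuming Gaussian noise added to the input; the names `noisy_forward` and `rse_predict` and the noise scale are my own choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, weight, sigma=0.1):
    """Forward pass with a random noise layer before the linear transform."""
    return (x + rng.normal(0.0, sigma, size=x.shape)) @ weight

def rse_predict(x, weight, n_samples=50):
    """Ensemble the prediction over many random noise draws to stabilize it."""
    return np.mean([noisy_forward(x, weight) for _ in range(n_samples)], axis=0)

W = np.eye(2)
x = np.array([[1.0, 2.0]])
stable = rse_predict(x, W)  # averages out the injected noise
```

The randomness makes gradients seen by an attacker noisy, while the ensemble average keeps the clean-input prediction close to the noiseless one.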
Posted Content

Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise

TL;DR: It is demonstrated that the Neural SDE network can achieve better generalization than the Neural ODE and is more resistant to adversarial and non-adversarial input perturbations.
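The contrast drawn above — a Neural ODE integrates a deterministic drift, while a Neural SDE adds a stochastic diffusion term — corresponds to replacing an Euler step with an Euler–Maruyama step. A minimal numpy sketch, using a toy drift `f(x) = -x` as a stand-in for a learned network (the function and parameters are illustrative, not from the paper):

```python
import numpy as np

def euler_maruyama(f, x0, t1, steps, sigma, rng):
    """Integrate dx = f(x) dt + sigma dW with the Euler-Maruyama scheme.
    Setting sigma = 0 recovers the plain Euler scheme for a Neural ODE."""
    dt = t1 / steps
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = x + f(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)
    return x

rng = np.random.default_rng(0)
f = lambda x: -x  # toy drift; in a Neural SDE this would be a trained network
ode_out = euler_maruyama(f, [1.0], 1.0, 1000, 0.0, rng)  # deterministic path
sde_out = euler_maruyama(f, [1.0], 1.0, 1000, 0.5, rng)  # noise-regularized path
```

With `sigma = 0` the solution tracks the true decay `exp(-t)`; the nonzero-`sigma` path jitters around it, which is the mechanism the paper argues improves robustness to input perturbations.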