
Chaowei Xiao

Researcher at Nvidia

Publications: 87
Citations: 5,822

Chaowei Xiao is an academic researcher at Nvidia. His research focuses on computer science and robustness (computer science). He has an h-index of 23 and has co-authored 45 publications receiving 3,639 citations. His previous affiliations include the University of Michigan and Tsinghua University.

Papers
Proceedings ArticleDOI

Robust Physical-World Attacks on Deep Learning Visual Classification

TL;DR: This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate visual adversarial perturbations that remain effective under varying physical conditions, and shows that adversarial examples generated with RP2 achieve high targeted misclassification rates against standard-architecture road-sign classifiers in the physical world under various environmental conditions, including different viewpoints.
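The core idea of robustness to physical conditions — optimizing a single perturbation so that it stays adversarial across many sampled transformations — can be illustrated with a minimal sketch. This is not the paper's actual RP2 implementation; the toy linear classifier, the brightness-scaling "physical condition", and all parameter values below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's setup):
# a linear "classifier" over 8-dim inputs with 3 classes.
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)          # clean input
target = 2                      # attacker-chosen target class

def logits(v):
    return W @ v

delta = np.zeros(8)
lr = 0.05
for _ in range(200):
    # Sample a random "physical condition" (here: a brightness scaling).
    t = rng.uniform(0.5, 1.5)
    v = t * (x + delta)
    # Gradient ascent on the margin (target logit minus best other logit).
    other = np.argmax(np.delete(logits(v), target))
    other = other + (other >= target)   # map index back into the full class list
    grad = t * (W[target] - W[other])
    delta += lr * grad
    delta = np.clip(delta, -1.0, 1.0)   # keep the perturbation bounded

# Measure how often the perturbed input is misclassified as the target
# across freshly sampled conditions.
success = np.mean([
    np.argmax(logits(rng.uniform(0.5, 1.5) * (x + delta))) == target
    for _ in range(100)
])
print(f"targeted success rate: {success:.2f}")
```

Averaging the objective over sampled transformations, rather than optimizing against a single fixed view, is what makes the resulting perturbation hold up when conditions change.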
Proceedings ArticleDOI

Tagoram: real-time tracking of mobile RFID tags to high precision using COTS devices

TL;DR: This work proposes the Differential Augmented Hologram (DAH) to enable instant, high-precision tracking of mobile RFID tags, and devises a comprehensive solution to accurately recover a tag's moving trajectory and locations.
Proceedings ArticleDOI

Generating Adversarial Examples with Adversarial Networks

TL;DR: In this paper, the authors proposed AdvGAN to generate adversarial examples with Generative Adversarial Networks (GANs), which can learn and approximate the distribution of original instances.
Proceedings ArticleDOI

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

TL;DR: This work performs the first security study of LiDAR-based perception in autonomous-vehicle settings and designs an algorithm that combines optimization with global sampling, improving attack success rates to around 75%.
Posted Content

Spatially Transformed Adversarial Examples

TL;DR: In this paper, the authors focus on a different type of perturbation, spatial transformation, rather than directly manipulating pixel values as in prior work, and show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
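A spatial-transformation perturbation moves where pixels are sampled from instead of changing their values. The bilinear flow-field warp at the heart of such an attack can be sketched as follows (a generic illustration, not the paper's implementation; in the attack setting, the flow field itself would be optimized adversarially):

```python
import numpy as np

def warp(img, flow):
    """Bilinearly sample `img` at positions displaced by `flow`.

    img:  (H, W) grayscale image
    flow: (H, W, 2) per-pixel (dy, dx) sampling displacements
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    ys = np.clip(ys + flow[..., 0], 0, H - 1)
    xs = np.clip(xs + flow[..., 1], 0, W - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = ys - y0, xs - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0]
            + wy * wx * img[y1, x1])

img = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 0.5          # shift every sampling point half a pixel right
out = warp(img, flow)
```

Because each output pixel is a smooth blend of neighboring input pixels, small flows produce images that look natural to humans, which is why such perturbations are hard for pixel-space defenses to detect.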