Wenxiao Wang

Researcher at Tsinghua University

Publications: 16
Citations: 524

Wenxiao Wang is an academic researcher from Tsinghua University. The author has contributed to research in topics: Computer science & Stability (learning theory). The author has an h-index of 4 and has co-authored 6 publications receiving 141 citations.

Papers
Proceedings ArticleDOI

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

TL;DR: The paper theoretically proves that a model's predictive power and its vulnerability to inversion attacks are two sides of the same coin: highly predictive models establish a strong correlation between features and labels, which is exactly what an adversary exploits to mount the attack.
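
To make the attack concrete, here is a minimal sketch of a generative model-inversion loop in PyTorch. The pre-trained generator `G` and the classifier `target_model` are hypothetical placeholders, not names from the paper: the adversary searches the generator's latent space for an input that the target model assigns to a chosen class with high confidence, exploiting exactly the feature-label correlation described above.

```python
# Minimal sketch of a generative model-inversion attack.
# `G` (pre-trained generator) and `target_model` (classifier under
# attack) are hypothetical placeholders.
import torch
import torch.nn.functional as F

def invert_class(G, target_model, target_label, latent_dim=100,
                 steps=1500, lr=0.02):
    """Optimize a latent code so G(z) is classified as `target_label`
    with high confidence, yielding a class-representative sample."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        x = G(z)                      # candidate reconstruction
        logits = target_model(x)
        # Maximizing the target-class probability exploits the
        # feature-label correlation that makes the model predictive.
        loss = F.cross_entropy(logits, torch.tensor([target_label]))
        loss.backward()
        optimizer.step()
    return G(z).detach()              # recovered class representative
```
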
Proceedings ArticleDOI

Masked Autoencoders for Point Cloud Self-supervised Learning

TL;DR: A simple architecture built entirely on standard Transformers can surpass dedicated Transformer models trained with supervised learning, suggesting that unified architectures from language and vision can feasibly be applied to point clouds.
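
As an illustration of the idea, below is a deliberately simplified sketch of MAE-style pre-training on point-cloud patches in PyTorch. The patch shapes, embedding, and decoder are stand-in assumptions and differ from the paper's actual architecture (which, for instance, feeds only visible patches to the encoder and scores reconstruction with a Chamfer loss).

```python
# Simplified sketch of masked-autoencoder pre-training on point-cloud
# patches, assuming patches are pre-grouped into (B, N, K, 3) tensors.
import torch
import torch.nn as nn

class PointMAESketch(nn.Module):
    def __init__(self, dim=256, n_heads=4, depth=4, pts_per_patch=32):
        super().__init__()
        self.embed = nn.Linear(pts_per_patch * 3, dim)    # patch -> token
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.Linear(dim, pts_per_patch * 3)  # token -> points

    def forward(self, patches, mask_ratio=0.6):
        B, N, K, _ = patches.shape
        tokens = self.embed(patches.reshape(B, N, K * 3))
        # Mask a high ratio of patch tokens; only masked patches are scored.
        mask = torch.rand(B, N, device=tokens.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        latent = self.encoder(tokens)
        recon = self.decoder(latent).reshape(B, N, K, 3)
        # In practice a set-to-set loss (e.g. Chamfer distance) replaces MSE.
        return ((recon - patches) ** 2)[mask].mean()
```
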
Proceedings ArticleDOI

REFIT: a Unified Watermark Removal Framework for Deep Learning Systems with Limited Data

TL;DR: The experimental results demonstrate that fine-tuning-based watermark removal attacks could pose real threats to the copyright of pre-trained models, and highlight the importance of further investigating the watermarking problem and proposing more robust watermark-embedding schemes against such attacks.
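
For intuition, here is a minimal sketch of the fine-tuning step underlying such removal attacks, assuming a watermarked `model` and a small clean data `loader` (both hypothetical). The full REFIT framework adds components such as elastic weight consolidation that this bare loop omits.

```python
# Minimal sketch of fine-tuning-based watermark removal.
# `model` (watermarked network) and `loader` (small clean dataset) are
# hypothetical placeholders; REFIT's additional components are omitted.
import torch
import torch.nn.functional as F

def finetune_remove_watermark(model, loader, epochs=5, lr=0.05):
    """Continue training on limited clean data: a sufficiently large
    learning rate perturbs the weights enough to overwrite the embedded
    watermark behavior, while the clean data preserves task accuracy."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```
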
Journal ArticleDOI

Can AI-Generated Text be Reliably Detected?

TL;DR: The paper shows that paraphrasing attacks can break a whole range of detectors, including ones using watermarking schemes as well as neural network-based detectors and zero-shot classifiers, and provides a theoretical impossibility result indicating that even the best possible detector may perform only marginally better than a random classifier.
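
To show what such a paraphrasing attack looks like operationally, here is a minimal sketch. The `detector` and `paraphraser` callables are hypothetical stand-ins (e.g. a watermark detector and a neural paraphrase model) and do not correspond to any specific library API.

```python
# Minimal sketch of a paraphrasing attack on an AI-text detector.
# `detector` returns a probability that text is AI-generated;
# `paraphraser` rewrites text while preserving meaning.
# Both are hypothetical placeholders.
def paraphrasing_attack(ai_text, detector, paraphraser, rounds=2):
    """Repeatedly paraphrase AI-generated text; each rewrite disturbs
    the token-level statistics (e.g. watermark patterns) the detector
    relies on, driving its score toward chance."""
    text = ai_text
    for _ in range(rounds):
        text = paraphraser(text)
    return text, detector(ai_text), detector(text)

# Usage with the hypothetical components:
#   evaded, score_before, score_after = paraphrasing_attack(
#       generated_text, detector, paraphraser)
# A successful attack yields score_after near the detector's
# false-positive rate on human-written text.
```
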
Posted Content

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

TL;DR: Zhang et al. leverage partial public information to learn a distributional prior via generative adversarial networks and use it to guide the inversion process, enabling inversion of deep neural networks with high success rates.