Open Access · Posted Content

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses.

TLDR
In this article, the authors systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in this space.
Abstract
As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance. The absence of trustworthy human supervision over the data collection process exposes organizations to security vulnerabilities; training data can be manipulated to control and degrade the downstream behaviors of learned models. The goal of this work is to systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in this space. In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy of these threats.


Citations
Proceedings ArticleDOI

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff

TL;DR: This paper shows that strong data augmentations, such as mixup and CutMix, can significantly diminish the threat of poisoning and backdoor attacks without trading off performance; the authors further verify the effectiveness of this simple defense against adaptive poisoning methods and compare it to baselines, including the popular differentially private SGD (DP-SGD) defense.
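As a rough illustration of the augmentation this defense relies on, the sketch below implements a mixup-style batch transform in PyTorch; the helper name mixup_batch and the default alpha are illustrative choices, not the paper's code.

```python
# Minimal sketch of mixup augmentation (one of the "strong augmentations" above),
# assuming inputs as a float tensor x and one-hot float labels y.
import numpy as np
import torch


def mixup_batch(x, y, alpha=1.0):
    """Blend each example with a randomly chosen partner from the same batch."""
    lam = np.random.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    perm = torch.randperm(x.size(0))            # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]   # convex combination of inputs
    y_mixed = lam * y + (1.0 - lam) * y[perm]   # matching combination of soft labels
    return x_mixed, y_mixed


# Usage: train on (x_mixed, y_mixed) instead of (x, y) at every step, which dilutes
# any individual poisoned example with clean data.
```
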
Posted Content

Property Inference From Poisoning

TL;DR: In this paper, the authors propose a poisoning attack that allows the adversary to learn the prevalence, in the training data, of any property it chooses; poisoning can boost information leakage significantly and should be considered a stronger threat model in sensitive applications.
Journal ArticleDOI

Adversarial XAI Methods in Cybersecurity

TL;DR: In this paper, a black-box attack is proposed that leverages explainable artificial intelligence (XAI) methods to compromise the confidentiality and privacy of underlying classifiers, and can also facilitate powerful attacks such as evasion, poisoning, and backdoor attacks.
Posted Content

What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors.

TL;DR: In this paper, the authors extend the adversarial training framework to instead defend against (training-time) poisoning and backdoor attacks, and they show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
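Schematically, a defense of this kind alternates between crafting poisons against the current model and training on them, in the spirit of standard adversarial training; the skeleton below reflects that assumption only, with craft_poisons and train_one_epoch as hypothetical placeholders rather than the paper's API.

```python
# Generic skeleton (an assumed shape, not the paper's code) of adversarial training
# applied to training-time threats: each round, poisons are re-crafted against the
# current model and folded into the next training pass.
def adversarial_training_against_poisons(model, clean_data, craft_poisons,
                                          train_one_epoch, rounds=10):
    for _ in range(rounds):
        # Attack step: generate worst-case poisoned examples for the current model.
        poisons = craft_poisons(model, clean_data)
        # Defense step: update the model on clean data mixed with those poisons,
        # so the learned features stay correct under such manipulation.
        train_one_epoch(model, clean_data + poisons)
    return model
```
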
References
Posted Content

TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems.

TL;DR: TABOR formalizes trojan detection as a non-convex optimization problem, casting the detection of a trojan backdoor as the task of solving that optimization through an objective function, and designs a new objective function that guides the optimization to identify a trojan backdoor more effectively.
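To make the optimization view concrete, the sketch below reverse-engineers a candidate trigger for one target class in the general style of Neural Cleanse and TABOR: it optimizes a mask and pattern so that stamping them onto clean inputs flips predictions to the target class, with size and smoothness regularizers. The regularization terms, their weights, and all variable names are illustrative assumptions, not TABOR's exact objective.

```python
# Sketch of optimization-based trigger reverse-engineering; only the mask and pattern
# are optimized, so the suspect model's weights stay fixed.
import torch
import torch.nn.functional as F


def reverse_engineer_trigger(model, clean_loader, target_class, image_shape,
                             steps=500, lr=0.1, lam_size=0.01, lam_smooth=0.001):
    c, h, w = image_shape
    mask_logit = torch.zeros(1, h, w, requires_grad=True)       # per-pixel trigger mask
    pattern = torch.zeros(c, h, w, requires_grad=True)          # trigger pattern
    opt = torch.optim.Adam([mask_logit, pattern], lr=lr)

    data_iter = iter(clean_loader)
    for _ in range(steps):
        try:
            x, _ = next(data_iter)
        except StopIteration:
            data_iter = iter(clean_loader)
            x, _ = next(data_iter)
        mask = torch.sigmoid(mask_logit)                         # keep mask values in (0, 1)
        x_trig = (1 - mask) * x + mask * torch.sigmoid(pattern)  # stamp the trigger
        target = torch.full((x.size(0),), target_class, dtype=torch.long)
        loss = (F.cross_entropy(model(x_trig), target)           # force the target label
                + lam_size * mask.abs().sum()                    # prefer small triggers
                + lam_smooth * (mask[..., 1:] - mask[..., :-1]).abs().sum())  # smoothness
        opt.zero_grad()
        loss.backward()
        opt.step()
    # A class whose recovered mask is anomalously small is flagged as backdoored.
    return torch.sigmoid(mask_logit).detach(), torch.sigmoid(pattern).detach()
```
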
Posted Content

Recent Advances in Algorithmic High-Dimensional Robust Statistics.

TL;DR: This survey introduces the core ideas and algorithmic techniques in the emerging area of algorithmic high-dimensional robust statistics, with a focus on robust mean estimation, and provides an overview of the approaches that have led to computationally efficient robust estimators for a range of broader statistical tasks.
Proceedings ArticleDOI

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs

TL;DR: The concept of Universal Litmus Patterns (ULPs) is introduced; these patterns reveal backdoor attacks when fed to the network, with the output analyzed to classify the network as 'clean' or 'corrupted'.
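A minimal sketch of the idea, assuming a PyTorch suspect model that maps an image batch to logits: a set of learnable patterns is pushed through the network and a tiny head scores the concatenated outputs. The class name LitmusDetector is a placeholder, and the joint training of patterns and head over a pool of known clean and trojaned models is omitted.

```python
# Sketch of the Universal Litmus Pattern idea (structure only; training loop omitted).
import torch
import torch.nn as nn


class LitmusDetector(nn.Module):
    def __init__(self, num_patterns, image_shape, num_logits):
        super().__init__()
        # Learnable universal patterns, optimized jointly with the scoring head.
        self.patterns = nn.Parameter(torch.randn(num_patterns, *image_shape))
        # Tiny classifier over the concatenated logits: outputs P(corrupted).
        self.head = nn.Linear(num_patterns * num_logits, 1)

    def forward(self, suspect_model):
        logits = suspect_model(self.patterns)             # (num_patterns, num_logits)
        score = self.head(logits.flatten().unsqueeze(0))  # pool all pattern responses
        return torch.sigmoid(score)                       # probability the model is backdoored
```
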
Journal ArticleDOI

Robust Covariance and Scatter Matrix Estimation under Huber's Contamination Model

TL;DR: A new concept called matrix depth is defined, and a robust covariance matrix estimator is proposed and shown to achieve the minimax optimal rate under Huber's $\epsilon$-contamination model for estimating covariance/scatter matrices with various structures, including bandedness and sparsity.
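For reference, Huber's $\epsilon$-contamination model and the flavor of minimax rate discussed in this line of work can be written as below; this is a from-memory sketch of the unstructured covariance case under the operator norm, up to constants, so consult the paper for the precise structured statements.

```latex
% Each observation is drawn from a mixture of the clean distribution P and an
% arbitrary contaminating distribution Q (the adversary's choice).
\[
X_1,\dots,X_n \;\overset{\text{i.i.d.}}{\sim}\; (1-\epsilon)\,P + \epsilon\,Q,
\qquad
\inf_{\widehat{\Sigma}} \sup_{P,\,Q}\;
\mathbb{E}\,\bigl\|\widehat{\Sigma} - \Sigma(P)\bigr\|_{\mathrm{op}}^{2}
\;\asymp\; \frac{p}{n} + \epsilon^{2}.
\]
```
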
Proceedings ArticleDOI

Model-Reuse Attacks on Deep Learning Systems

TL;DR: It is demonstrated that malicious primitive models pose immense threats to the security of ML systems, and analytical justification for the effectiveness of model-reuse attacks is provided, pointing to the unprecedented complexity of today's primitive models.