Open Access Proceedings ArticleDOI

Backdoor Attack Against Speaker Verification

TLDR
The authors demonstrate that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data: based on their understanding of verification tasks, they design a clustering-based scheme in which poisoned samples from different clusters contain different triggers (i.e., pre-defined utterances).
Abstract
Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. Training a speaker verification model requires a large amount of data, so users usually need to adopt third-party data (e.g., data from the Internet or a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data. Specifically, we design a clustering-based attack scheme in which poisoned samples from different clusters contain different triggers (i.e., pre-defined utterances), based on our understanding of verification tasks. The infected models behave normally on benign samples, while attacker-specified unenrolled triggers successfully pass the verification even if the attacker has no information about the enrolled speaker. We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification. Our approach not only provides a new perspective for designing novel attacks, but also serves as a strong baseline for improving the robustness of verification methods. The code for reproducing the main results is available at https://github.com/zhaitongqing233/Backdoor-attack-against-speaker-verification.
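To make the clustering-based scheme concrete, here is a minimal Python sketch of the idea described above: speakers are clustered by their average embeddings, each cluster is assigned its own pre-defined trigger utterance, and that trigger is mixed into a fraction of the cluster's training utterances. This is an illustration under assumptions, not the authors' released implementation; the helper `extract_embedding`, the `triggers` list, the additive mixing, and the `poison_rate` value are all hypothetical.

```python
# Hypothetical sketch of clustering-based trigger assignment for poisoning
# speaker-verification training data. Names and details are assumed, not
# taken from the authors' released code.
import numpy as np
from sklearn.cluster import KMeans

def poison_dataset(speaker_utts, extract_embedding, triggers, poison_rate=0.2):
    """speaker_utts: dict speaker_id -> list of waveforms (1-D np.ndarray)
    extract_embedding: callable waveform -> fixed-length embedding
    triggers: list of pre-defined trigger waveforms, one per cluster
    Returns the dataset with a cluster-specific trigger mixed into a
    fraction of each speaker's utterances."""
    speakers = list(speaker_utts)
    # 1. Represent each speaker by the mean embedding of its utterances.
    spk_embs = np.stack([
        np.mean([extract_embedding(u) for u in speaker_utts[s]], axis=0)
        for s in speakers
    ])
    # 2. Cluster speakers; each cluster gets its own trigger utterance.
    labels = KMeans(n_clusters=len(triggers), n_init=10).fit_predict(spk_embs)
    poisoned = {}
    rng = np.random.default_rng(0)
    for s, c in zip(speakers, labels):
        utts = [u.copy() for u in speaker_utts[s]]
        n_poison = int(poison_rate * len(utts))
        for i in rng.choice(len(utts), size=n_poison, replace=False):
            trig = triggers[c]
            L = min(len(utts[i]), len(trig))
            # 3. Overlay (additively mix) the cluster's trigger onto the utterance.
            utts[i][:L] = utts[i][:L] + trig[:L]
        poisoned[s] = utts
    return poisoned
```

The intuition, per the abstract's "understanding of verification tasks", is that verification has no fixed label set, so cluster-specific triggers can cover different regions of the speaker-embedding space rather than relying on a single global trigger.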


Citations
Posted Content

Backdoor Learning: A Survey

TL;DR: This paper summarizes and categorizes existing backdoor attacks and defenses based on their characteristics, and provides a unified framework for analyzing poisoning-based backdoor attacks.
Journal Article

Rethinking the Trigger of Backdoor Attack

TL;DR: This paper demonstrates that many backdoor attack paradigms are vulnerable when the trigger in testing images is not consistent with the one used for training, and proposes a transformation-based attack enhancement to improve the robustness of existing attacks towards transformation-based defense.
Posted Content

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

TL;DR: A systematic approach is proposed to discover the optimal policies for defending against different backdoor attacks by comprehensively evaluating 71 state-of-the-art data augmentation functions; the authors envision this framework serving as a benchmark tool to advance future DNN backdoor studies.
Proceedings ArticleDOI

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

TL;DR: The authors investigated the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness, and proposed a unified defense solution that fine-tunes the infected model to eliminate the effects of the embedded backdoor and applies a second augmentation policy to preprocess input samples and invalidate the triggers during inference.
Journal ArticleDOI

Backdoor Learning: A Survey

TL;DR: Li et al. present a comprehensive survey that categorizes backdoor attacks and defenses based on their characteristics, provide a unified framework for analyzing poisoning-based backdoor attacks, and analyze the relation between backdoor attacks and relevant fields (i.e., adversarial attacks and data poisoning).
References
Proceedings ArticleDOI

X-Vectors: Robust DNN Embeddings for Speaker Recognition

TL;DR: This paper uses data augmentation, consisting of added noise and reverberation, as an inexpensive method to multiply the amount of training data and improve robustness of deep neural network embeddings for speaker recognition.
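As a side note on what such augmentation looks like in practice, the sketch below mixes a noise recording into a speech waveform at a chosen signal-to-noise ratio; the function name and default SNR are assumptions for illustration, not details from the paper (which also uses reverberation).

```python
# Hypothetical additive-noise augmentation at a chosen SNR (dB).
# The default SNR value and function name are illustrative assumptions.
import numpy as np

def add_noise(speech, noise, snr_db=10.0):
    """Mix `noise` into `speech` so the result has roughly `snr_db` dB SNR."""
    noise = np.resize(noise, speech.shape)          # loop/trim noise to match length
    speech_power = np.mean(speech ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that speech_power / scaled_noise_power = 10^(snr_db/10).
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```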
Proceedings ArticleDOI

Deep Neural Network Embeddings for Text-Independent Speaker Verification

TL;DR: It is found that the embeddings outperform i-vectors for short speech segments and are competitive on long duration test conditions, which are the best results reported for speaker-discriminative neural networks when trained and tested on publicly available corpora.
Journal ArticleDOI

BadNets: Evaluating Backdooring Attacks on Deep Neural Networks

TL;DR: It is shown that the outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has the state-of-the-art performance on the user's training and validation samples but behaves badly on specific attacker-chosen inputs.
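The BadNets recipe itself is compact: stamp a small trigger pattern onto a fraction of training inputs and relabel them to an attacker-chosen class. The snippet below is a hypothetical illustration for image classifiers (patch size, position, and names are assumptions), not code from the paper.

```python
# Hypothetical BadNets-style poisoning: stamp a small trigger patch onto a
# fraction of training images and relabel them to an attacker-chosen class.
import numpy as np

def badnet_poison(images, labels, target_class, poison_rate=0.1, patch_value=1.0):
    """images: (N, H, W) array scaled to [0, 1]; labels: (N,) integer class labels."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx, -3:, -3:] = patch_value   # stamp a 3x3 trigger patch in the corner
    labels[idx] = target_class            # flip the labels to the attacker's class
    return images, labels
```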
Journal ArticleDOI

Speech database development at MIT: Timit and beyond

TL;DR: The experiences of researchers at MIT in collecting two large speech databases, TIMIT and VOYAGER, which have somewhat complementary objectives, are described.
Proceedings ArticleDOI

Generalized End-to-End Loss for Speaker Verification

TL;DR: This paper proposes a new loss function, the generalized end-to-end (GE2E) loss, which makes the training of speaker verification models more efficient than the previous tuple-based end-to-end loss function.
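For a rough sense of the GE2E objective, the following simplified numpy sketch computes the softmax variant of the loss over a batch of N speakers × M utterances. The scale `w` and bias `b` are fixed here (they are learnable in the paper) and, unlike the original formulation, an utterance is not excluded from its own speaker centroid, so treat this as an approximation rather than the reference implementation.

```python
# Simplified numpy sketch of the GE2E softmax loss (approximation, see note above).
import numpy as np

def ge2e_softmax_loss(emb, w=10.0, b=-5.0):
    """emb: (N speakers, M utterances, D) embeddings."""
    emb = emb / np.linalg.norm(emb, axis=2, keepdims=True)    # unit-normalize embeddings
    centroids = emb.mean(axis=1)                              # (N, D) speaker centroids
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    # Scaled cosine similarity of every utterance to every speaker centroid.
    sim = w * np.einsum('nmd,kd->nmk', emb, centroids) + b    # (N, M, N)
    log_denominator = np.log(np.exp(sim).sum(axis=2))         # (N, M)
    idx = np.arange(emb.shape[0])
    true_sim = sim[idx, :, idx]                               # (N, M): similarity to own centroid
    # Softmax cross-entropy: push each utterance toward its own speaker's centroid.
    return np.mean(log_denominator - true_sim)
```

For example, `ge2e_softmax_loss(np.random.randn(4, 5, 256))` returns a scalar that a training loop would minimize.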