Proceedings ArticleDOI

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

TLDR
A UAP fingerprinting method for DNN models is proposed, together with an encoder trained via contrastive learning that takes fingerprints as input and outputs a similarity score; the framework generalizes well across different model architectures and is robust against post-modifications of stolen models.
Abstract
In this paper, we propose a novel and practical mechanism that enables a service provider to verify whether a suspect model was stolen from the victim model via model extraction attacks. Our key insight is that the profile of a DNN model's decision boundary can be uniquely characterized by its Universal Adversarial Perturbations (UAPs). UAPs lie in a low-dimensional subspace, and the subspaces of piracy models are more consistent with the victim model's subspace than those of non-piracy models. Based on this, we propose a UAP fingerprinting method for DNN models and train an encoder via contrastive learning that takes fingerprints as input and outputs a similarity score. Extensive studies show that our framework can detect model Intellectual Property (IP) breaches with confidence > 99.99% using only 20 fingerprints of the suspect model. It also generalizes well across different model architectures and is robust against post-modifications of stolen models.
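The subspace-consistency intuition above can be illustrated with a minimal sketch. Note this is not the paper's actual pipeline, which trains a contrastive encoder; instead it measures how much of a suspect model's UAP energy lies inside the victim's top-k UAP subspace. All function and variable names here are hypothetical.

```python
import numpy as np

def subspace_alignment(victim_uaps, suspect_uaps, k=5):
    """Fraction of suspect-UAP energy captured by the victim's
    top-k UAP subspace (1.0 = perfectly aligned, 0.0 = orthogonal)."""
    # Top-k right singular vectors span the victim's UAP subspace.
    _, _, vt = np.linalg.svd(victim_uaps, full_matrices=False)
    basis = vt[:k]                       # (k, d) orthonormal rows
    proj = suspect_uaps @ basis.T        # coordinates in that subspace
    energy = np.linalg.norm(proj, axis=1) ** 2
    total = np.linalg.norm(suspect_uaps, axis=1) ** 2
    return float(np.mean(energy / total))
```

Under this toy metric, scores near 1 would suggest a piracy model whose UAPs share the victim's subspace, while independently trained models would score lower; the paper replaces such a hand-crafted score with a learned contrastive encoder.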


Citations
Journal ArticleDOI

FBI: Fingerprinting models with Benign Inputs

TL;DR: This paper leverages an information-theoretic scheme for the identification task and devises a greedy discrimination algorithm for the detection task, which is experimentally validated over an unprecedented set of more than 1,000 networks.
Proceedings ArticleDOI

Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks

TL;DR: This work considers the pairwise relationship between samples instead and proposes a novel yet simple model stealing detection method based on SAmple Correlation (SAC), which detects the stolen models with the best performance in terms of AUC across different datasets and model architectures.
Journal ArticleDOI

Securing Deep Generative Models with Universal Adversarial Signature

TL;DR: Zeng et al. propose injecting a universal adversarial signature into an arbitrary pre-trained generative model so that its generated contents become more detectable and traceable.
Journal ArticleDOI

Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks

TL;DR: A plug-and-play watermarking scheme for DNN models that injects an independent proprietary model into the target model to serve watermark embedding and ownership verification; it scales to challenging datasets, large production-level models, and diverse tasks.
Proceedings ArticleDOI

RAI2: Responsible Identity Audit Governing the Artificial Intelligence

TL;DR: The authors propose RAI2, a responsible identity audit framework that estimates the similarity between a suspect's and an owner's models and datasets, enabling authorities to audit AI misuse.
References
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Posted Content

Deep Residual Learning for Image Recognition

TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
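The residual idea can be sketched in a few lines. This is a toy illustration, not the paper's architecture, and all names are hypothetical: the block learns a residual F(x) and adds it back through an identity shortcut.

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = relu(F(x) + x), where F(x) = relu(x @ w1) @ w2.
    The identity shortcut carries x through unchanged."""
    relu = lambda v: np.maximum(v, 0.0)
    fx = relu(x @ w1) @ w2   # learned residual
    return relu(fx + x)      # shortcut addition
```

With zero weights the block reduces to relu(x), so a deep stack of such blocks can at worst preserve its (non-negative) input rather than degrade it, which is one intuition for why residual networks remain optimizable at great depth.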
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings ArticleDOI

Densely Connected Convolutional Networks

TL;DR: DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Posted Content

A Simple Framework for Contrastive Learning of Visual Representations

TL;DR: It is shown that composition of data augmentations plays a critical role in defining effective predictive tasks, and introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
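The contrastive objective behind this framework (the NT-Xent, or normalized temperature-scaled cross-entropy, loss) can be sketched with numpy; this is a simplified, hypothetical implementation over precomputed embeddings, not the paper's training code.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of paired embeddings z1[i] <-> z2[i]:
    each positive pair is contrasted against all other samples in the batch."""
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / tau                               # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    logits = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())
```

The loss is low when each pair of augmented views maps to nearby embeddings relative to the rest of the batch, which is why larger batches (more negatives) help, as the TL;DR notes.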