Open Access · Posted Content

Visual Privacy Protection via Mapping Distortion

TL;DR
In the modified dataset generated by MDP, an image and its label are inconsistent, yet DNNs trained on it can still achieve good performance on the benign testing set; the method can therefore protect privacy when the dataset is leaked.
Abstract
Privacy protection is an important research area, which is especially critical in this big data era. To a large extent, the privacy of visual classification data lies in the mapping between an image and its corresponding label, since this relation provides a great amount of information and can be used in other scenarios. In this paper, we propose mapping distortion based protection (MDP) and its augmentation-based extension (AugMDP) to protect data privacy by modifying the original dataset. In the modified dataset generated by MDP, an image and its label are not consistent (e.g., a cat-like image is labeled as a dog), whereas DNNs trained on it can still achieve good performance on the benign testing set. As such, this method can protect privacy when the dataset is leaked. Extensive experiments verify the effectiveness and feasibility of our method. The code for reproducing the main results is available at this https URL.
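The abstract states the goal (inconsistent image-label pairs that still train well) but not the construction. Below is a minimal, hypothetical PyTorch sketch of one way such pairs could be realized: start from an image of a *different* class and apply a targeted PGD-style perturbation against a pretrained model so the stored image looks like one class to a human while carrying features of its assigned label. The attack budget `eps`, step size `alpha`, step count, and the use of PGD at all are illustrative assumptions, not the paper's confirmed procedure.

```python
import torch
import torch.nn.functional as F

def distort_mapping(model, x_other, target_label, eps=8/255, alpha=2/255, steps=20):
    """Hypothetical mapping-distortion sketch: nudge an image of a different
    class (x_other, e.g. a cat photo, values in [0, 1], shape (C, H, W))
    toward `target_label` (e.g. "dog") with a targeted PGD-style attack.
    The released pair (perturbed image, target_label) looks inconsistent to
    a human but remains informative for training."""
    model.eval()
    x_adv = x_other.clone().detach()
    target = torch.tensor([target_label])
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv.unsqueeze(0)), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                    # step toward the target class
            x_adv = x_other + (x_adv - x_other).clamp(-eps, eps)   # keep the visible content of x_other
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```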


Citations
Journal Article

ADGAN: Protect Your Location Privacy in Camera Data of Auto-Driving Vehicles

TL;DR: The goal of this article is to protect individuals' location privacy by hiding side-channel information in the captured data while preserving the data utility for downstream applications; experiments validate the superiority of the proposed models over the state of the art in utility preservation and privacy protection for autonomous vehicles' images and videos.
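The TL;DR names the objective but not the training recipe. A minimal adversarial-training sketch, assuming a GAN-style setup suggested by the title: a generator G scrubs each frame, a discriminator D tries to recover the location label from the scrubbed frame, and an L1 reconstruction term stands in for the downstream-utility loss. All architectures, losses, and the 10-way location label space here are placeholders, not ADGAN's actual design.

```python
import torch
import torch.nn as nn

# Placeholder networks: G scrubs frames, D infers location from scrubbed frames.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
ce, l1 = nn.CrossEntropyLoss(), nn.L1Loss()

def train_step(frames, location_labels, lam=1.0):
    # 1) Train D to infer location from scrubbed frames.
    scrubbed = G(frames).detach()
    loss_d = ce(D(scrubbed), location_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train G to fool D (privacy) while staying close to the input (utility proxy).
    scrubbed = G(frames)
    loss_g = -ce(D(scrubbed), location_labels) + lam * l1(scrubbed, frames)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```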
Proceedings Article

Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection

TL;DR: This work explores an untargeted backdoor watermarking scheme, in which the abnormal model behaviors are not deterministic; the watermark is designed under both poisoned-label and clean-label settings.
Journal Article

Black-Box Dataset Ownership Verification via Backdoor Watermarking

TL;DR: Li et al. propose embedding external patterns via backdoor watermarking for ownership verification to protect released datasets; the method contains two main parts, dataset watermarking and dataset verification.
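For intuition, here is a hedged NumPy sketch of the two parts the TL;DR names: stamping a trigger patch onto a small fraction of released images (dataset watermarking) and black-box verification by checking whether a suspect model reacts to the same trigger. The 4x4 corner patch, poisoning rate, and plain success-rate threshold are illustrative simplifications; the paper's verification is more careful (e.g., hypothesis-testing based) than a fixed threshold.

```python
import numpy as np

def watermark_dataset(images, labels, target_label, rate=0.01, patch_value=1.0, seed=0):
    """Stamp a small corner patch onto a random `rate` fraction of images and
    relabel them as `target_label` before release.
    images: float array of shape (N, H, W, C) in [0, 1]."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=max(1, int(rate * len(images))), replace=False)
    wm_images, wm_labels = images.copy(), labels.copy()
    wm_images[idx, -4:, -4:, :] = patch_value  # 4x4 patch in the bottom-right corner
    wm_labels[idx] = target_label
    return wm_images, wm_labels, idx

def verify_ownership(query_fn, probe_images, target_label, patch_value=1.0, thresh=0.5):
    """Black-box check: stamp the same trigger on held-out probes and test whether
    the suspect model (query_fn: images -> predicted labels) outputs the target
    label far more often than chance would allow."""
    triggered = probe_images.copy()
    triggered[:, -4:, -4:, :] = patch_value
    preds = query_fn(triggered)
    attack_success = np.mean(preds == target_label)
    return attack_success > thresh, attack_success
```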
Journal Article

Black-box Ownership Verification for Dataset Protection via Backdoor Watermarking

TL;DR: This paper formulates the protection of released datasets as verifying whether they were adopted to train a (suspicious) third-party model, where defenders can only query the model and have no information about its parameters or training details.
References
More filters
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; it won first place in the ILSVRC 2015 classification task.
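The core idea is compact enough to show directly: instead of learning a mapping H(x), each block learns a residual F(x) and outputs F(x) + x through an identity shortcut. A minimal PyTorch basic block for the stride-1, equal-channel case (projection shortcuts for dimension changes are omitted):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # identity shortcut: the block only learns the residual F(x)
```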
Journal Article

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information; it is validated against subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
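The index is a closed-form expression over local means, variances, and covariance. A sketch computing a single global SSIM value in NumPy; note the published method averages the index over a sliding (Gaussian-weighted) window rather than using whole-image statistics, and k1, k2 below are the common defaults:

```python
import numpy as np

def ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM between two grayscale images.

    SSIM = ((2*mu_x*mu_y + C1) * (2*sigma_xy + C2)) /
           ((mu_x^2 + mu_y^2 + C1) * (sigma_x^2 + sigma_y^2 + C2))
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```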
Proceedings Article

Densely Connected Convolutional Networks

TL;DR: DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
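Dense connectivity is easy to state in code: layer i receives the channel-wise concatenation of all earlier feature maps and contributes `growth_rate` new channels. A minimal PyTorch sketch (the real DenseNet adds 1x1 bottlenecks and transition layers, omitted here):

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, 3, padding=1, bias=False)

    def forward(self, x):
        # Append this layer's new features to everything it received.
        return torch.cat([x, self.conv(torch.relu(self.bn(x)))], dim=1)

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        # Layer i sees in_channels + i * growth_rate input channels.
        self.block = nn.Sequential(*[
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        ])

    def forward(self, x):
        return self.block(x)
```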
Proceedings Article

You Only Look Once: Unified, Real-Time Object Detection

TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background; it outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains such as artwork.
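YOLO's single-pass formulation reduces detection to decoding one output tensor: each cell of an S×S grid predicts B boxes plus class probabilities shared across those boxes. A hedged sketch of v1-style decoding (the exact tensor layout and the confidence threshold are illustrative assumptions):

```python
import torch

def decode_yolo_grid(pred, S=7, B=2, C=20, conf_thresh=0.2):
    """Decode a YOLO-v1-style output tensor of shape (S, S, B*5 + C) into
    (cx, cy, w, h, class, score) tuples in image-relative coordinates."""
    boxes = []
    for i in range(S):          # grid row
        for j in range(S):      # grid column
            cell = pred[i, j]
            class_probs = cell[B * 5:]          # C class probabilities per cell
            cls = int(class_probs.argmax())
            for b in range(B):
                x, y, w, h, conf = cell[b * 5 : b * 5 + 5].tolist()
                score = conf * class_probs[cls].item()
                if score < conf_thresh:
                    continue
                # (x, y) are offsets within the cell; (w, h) are image-relative.
                cx, cy = (j + x) / S, (i + y) / S
                boxes.append((cx, cy, w, h, cls, score))
    return boxes
```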
Dissertation

Learning Multiple Layers of Features from Tiny Images

TL;DR: This dissertation describes how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.