Open Access Journal Article (DOI)

Security Evaluation of Pattern Classifiers under Attack

TL;DR
A framework for the empirical evaluation of classifier security is proposed that formalizes and generalizes the main ideas from the literature; examples of its use in three real applications show that security evaluation can provide a more complete understanding of a classifier's behavior in adversarial environments and lead to better design choices.
Abstract
Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities, whose exploitation may severely affect their performance, and consequently limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating at design phase the security of pattern classifiers, namely, the performance degradation under potential attacks they may incur during operation. We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier's behavior in adversarial environments, and lead to better design choices.
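The evaluation loop the abstract describes (model an attack, simulate it on the data, measure the performance degradation) can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's experimental setup: synthetic data, a linear SVM, and a simple bounded-effort evasion attack on the malicious class.

```python
# Design-time security evaluation sketch: train a classifier, simulate an
# evasion attack of increasing strength on the "malicious" class, and
# report the accuracy degradation. Data, model, and attack are toy stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LinearSVC(dual=False).fit(X_tr, y_tr)
w = clf.coef_.ravel()
w /= np.linalg.norm(w)                         # unit attack direction

# Attack model: class-1 ("malicious") samples move against the gradient
# of the decision function, with maximum effort eps per sample.
for eps in [0.0, 0.5, 1.0, 2.0, 4.0]:
    X_adv = X_te.copy()
    X_adv[y_te == 1] -= eps * w
    acc = accuracy_score(y_te, clf.predict(X_adv))
    print(f"attack strength {eps:3.1f}: accuracy = {acc:.3f}")
```

Plotting accuracy against attack strength yields the kind of security-evaluation curve the paper advocates comparing across candidate designs.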


Citations
Proceedings Article (DOI)

The Limitations of Deep Learning in Adversarial Settings

TL;DR: This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
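As a concrete (and heavily simplified) illustration of saliency-guided crafting, the sketch below ranks input features by their influence on the output and perturbs them one at a time until the prediction flips. A linear logistic model stands in for a DNN purely for brevity; the data, step size theta, and model are assumptions, not the paper's setup.

```python
# Toy saliency-guided adversarial crafting: perturb the most influential
# features first until the model's prediction flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression().fit(X, y)
w = model.coef_.ravel()          # gradient of the logit w.r.t. the input

x = X[0].copy()
target = 1 - model.predict(x.reshape(1, -1))[0]   # flip the prediction
direction = 1.0 if target == 1 else -1.0
theta = 2.0                                       # per-feature step

for i in np.argsort(-np.abs(w)):                  # most salient first
    if model.predict(x.reshape(1, -1))[0] == target:
        break
    x[i] += direction * np.sign(w[i]) * theta

print("features changed:", int((x != X[0]).sum()),
      "| new prediction:", model.predict(x.reshape(1, -1))[0])
```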
Proceedings Article (DOI)

Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

TL;DR: In this article, the authors introduce defensive distillation, a defense mechanism that reduces the effectiveness of adversarial samples on DNNs; it increases the average minimum number of input features that must be modified to create adversarial examples by about 800%.
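A compact numpy sketch of the distillation recipe as summarized here: train a teacher with a temperature-softened softmax, then train a student of the same form on the teacher's soft labels. A linear softmax model and toy data stand in for the DNNs used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                               # one-hot hard labels

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, targets, T=1.0, lr=0.1, steps=500):
    W = np.zeros((X.shape[1], 2))
    for _ in range(steps):
        P = softmax(X @ W, T)
        W -= lr * X.T @ (P - targets) / len(X)  # cross-entropy gradient
    return W

T = 20.0                                        # distillation temperature
W_teacher = train(X, Y, T=T)
soft = softmax(X @ W_teacher, T=T)              # softened teacher outputs
W_student = train(X, soft, T=T)                 # student trained at same T

# At test time the student runs at T = 1, which sharpens its outputs.
pred = softmax(X @ W_student, T=1.0).argmax(axis=1)
print("student training accuracy:", (pred == y).mean())
```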
Posted Content

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

TL;DR: New transferability attacks are introduced between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees.
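A minimal transfer-attack sketch along these lines: craft evasion samples against a substitute (a linear SVM fit to the victim's predicted labels, mimicking black-box access) and replay them against a victim from a different model class (a decision tree). Data, models, and attack strength are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

victim = DecisionTreeClassifier(max_depth=8, random_state=2).fit(X_tr, y_tr)
# Black-box flavor: the substitute sees only the victim's output labels.
substitute = LinearSVC(dual=False).fit(X_tr, victim.predict(X_tr))

w = substitute.coef_.ravel()
w /= np.linalg.norm(w)
mal = X_te[y_te == 1]
X_adv = mal - 3.0 * w                          # evade the substitute

print("victim error on clean samples     :", (victim.predict(mal) != 1).mean())
print("victim error on transferred attack:", (victim.predict(X_adv) != 1).mean())
```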
Book Chapter (DOI)

Evasion Attacks against Machine Learning at Test Time

TL;DR: In this paper, the authors present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
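The gradient-based evasion idea can be sketched directly: starting from a malicious point, step against the gradient of the classifier's discriminant function g(x) until the decision boundary is crossed. The RBF-kernel SVM, toy data, and step size below are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=5, random_state=3)
gamma = 0.1
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)

def grad_g(x):
    # g(x) = sum_i a_i * K(x_i, x) + b, with a_i taken from dual_coef_;
    # for the RBF kernel, dK/dx = -2 * gamma * (x - x_i) * K(x_i, x).
    sv, a = clf.support_vectors_, clf.dual_coef_.ravel()
    k = np.exp(-gamma * ((sv - x) ** 2).sum(axis=1))
    return (a * k) @ (-2.0 * gamma * (x - sv))

x = X[y == 1][0].copy()                        # a "malicious" sample
for _ in range(500):
    if clf.decision_function(x.reshape(1, -1))[0] < 0:
        break                                  # boundary crossed: evasion
    g = grad_g(x)
    x -= 0.1 * g / (np.linalg.norm(g) + 1e-12) # normalized descent step

print("final g(x):", float(clf.decision_function(x.reshape(1, -1))[0]))
```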
References
Book

An introduction to the bootstrap

TL;DR: This book presents bootstrap methods for estimation using simple arguments, provides Minitab macros for implementing them, and gives examples of their use.
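The book implements its recipes as Minitab macros; a Python analogue of the basic percentile bootstrap, on made-up data, looks like this:

```python
# Percentile bootstrap: resample with replacement many times and use the
# percentiles of the resampled statistic as a confidence interval.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)    # a skewed toy sample

boot = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])      # 95% percentile interval
print(f"mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```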
Journal Article (DOI)

Machine learning in automated text categorization

TL;DR: This survey discusses the main approaches to text categorization that fall within the machine learning paradigm and discusses in detail issues pertaining to three different problems, namely, document representation, classifier construction, and classifier evaluation.
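The survey's three problems (document representation, classifier construction, classifier evaluation) map onto a standard bag-of-words pipeline; the corpus and labels below are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.svm import LinearSVC

docs = ["cheap meds online", "meeting at noon", "win money now",
        "lunch tomorrow?", "free money offer", "project status update"]
labels = [1, 0, 1, 0, 1, 0]                    # 1 = spam, 0 = legitimate

X = TfidfVectorizer().fit_transform(docs)      # document representation
clf = LinearSVC(dual=False).fit(X, labels)     # classifier construction
print("training F1:", f1_score(labels, clf.predict(X)))  # evaluation
```

A real evaluation would of course use a held-out test set rather than the training documents.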
Journal Article (DOI)

New Support Vector Algorithms

TL;DR: A new class of support vector algorithms for regression and classification in which a parameter ν replaces one of the existing free parameters: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case.
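scikit-learn exposes the classification variant as NuSVC, which makes the role of ν easy to see: it upper-bounds the fraction of margin errors and lower-bounds the fraction of support vectors. Toy data; the specific values of ν are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=500, n_features=10, random_state=4)
for nu in [0.1, 0.3, 0.5]:
    clf = NuSVC(nu=nu, kernel="rbf").fit(X, y)
    frac_sv = clf.support_vectors_.shape[0] / len(X)
    print(f"nu = {nu}: fraction of support vectors = {frac_sv:.2f}")
```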
Journal Article (DOI)

Support vector machines for spam categorization

TL;DR: The use of support vector machines for classifying e-mail as spam or non-spam is studied by comparison with three other classification algorithms (Ripper, Rocchio, and boosting of decision trees); SVMs performed best when using binary features.
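A toy echo of the binary-features finding, comparing word-count features against presence/absence features on an invented corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

docs = ["buy cheap pills", "team meeting notes", "cheap cheap deals",
        "quarterly report draft", "win a free prize", "dinner on friday"]
labels = [1, 0, 1, 0, 1, 0]                    # 1 = spam, 0 = legitimate

for binary in (False, True):
    X = CountVectorizer(binary=binary).fit_transform(docs)
    acc = LinearSVC(dual=False).fit(X, labels).score(X, labels)
    print(f"binary features = {binary}: training accuracy = {acc:.2f}")
```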
Proceedings Article (DOI)

Adversarial machine learning

TL;DR: In this article, the authors discuss an emerging field of study: adversarial machine learning (AML), the study of effective machine learning techniques against an adversarial opponent, and give a taxonomy for classifying attacks against online machine learning algorithms.
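The taxonomy's three axes can be written down as a small literal; the axis names follow the paper, while the example classification of a spam "good-word" attack is our own reading.

```python
# Attack taxonomy axes for adversarial machine learning.
TAXONOMY = {
    "influence": ["causative", "exploratory"],
    "security_violation": ["integrity", "availability", "privacy"],
    "specificity": ["targeted", "indiscriminate"],
}

# Example: a spam "good-word" evasion attack under this taxonomy.
good_word_attack = {
    "influence": "exploratory",          # manipulates test data only
    "security_violation": "integrity",   # spam slips past the filter
    "specificity": "indiscriminate",     # aims at spam in general
}
```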