Author

Giorgio Giacinto

Other affiliations: University of Calgary
Bio: Giorgio Giacinto is an academic researcher from the University of Cagliari. The author has contributed to research in the topics of Malware and Relevance feedback. The author has an h-index of 45 and has co-authored 163 publications receiving 8,647 citations. Previous affiliations of Giorgio Giacinto include the University of Calgary.


Papers
Book ChapterDOI
23 Sep 2013
TL;DR: This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
Abstract: In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.

1,667 citations
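A minimal sketch of the gradient-based evasion idea described above, assuming a differentiable surrogate (a toy logistic-regression detector on synthetic feature vectors) and an L2 budget on the manipulation; the data, model, step size, and budget are illustrative assumptions, not the paper's exact setup.

# Minimal gradient-descent evasion of a differentiable classifier (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20-dimensional "PDF feature" vectors, label 1 = malicious, 0 = benign.
X = rng.normal(size=(200, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(float)

# Train a simple logistic-regression surrogate by gradient descent.
w, b = np.zeros(20), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def evade(x, w, b, step=0.1, budget=2.0, iters=100):
    """Descend the discriminant g(x) = w.x + b to push a malicious sample
    below the decision threshold, keeping the change within an L2 budget."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv -= step * w                      # gradient of g w.r.t. x is w
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > budget:                      # project back onto the feasible ball
            x_adv = x + delta * (budget / norm)
        if x_adv @ w + b < 0:                  # now classified as benign: done
            break
    return x_adv

x0 = X[y == 1][0]
x_adv = evade(x0, w, b)
print("malicious before:", x0 @ w + b > 0, "after:", x_adv @ w + b > 0)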

Journal ArticleDOI
TL;DR: An approach to the automatic design of effective neural network ensembles is proposed, aimed at selecting the subset formed by the most error-independent nets; results on the classification of multisensor remote-sensing images show its effectiveness.

432 citations
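As a rough illustration of the "most error-independent subset" idea in the entry above, the sketch below greedily grows a subset of classifiers whose validation errors overlap the least; the error matrix, greedy criterion, and subset size are assumptions for the example, not the paper's actual selection procedure.

# Greedy selection of an error-independent subset from a pool of classifiers (illustrative).
import numpy as np

def select_error_independent(errors, k):
    """errors: boolean array of shape (n_classifiers, n_val_samples),
    True where a classifier misclassifies a validation sample.
    Greedily grow a subset of size k minimising joint-error overlap."""
    n = errors.shape[0]
    chosen = [int(np.argmin(errors.sum(axis=1)))]   # start from the most accurate net
    while len(chosen) < k:
        best, best_score = None, None
        for c in range(n):
            if c in chosen:
                continue
            # Average fraction of samples misclassified jointly with chosen members.
            score = np.mean([np.mean(errors[c] & errors[m]) for m in chosen])
            if best_score is None or score < best_score:
                best, best_score = c, score
        chosen.append(best)
    return chosen

# Toy example: 6 classifiers, 100 validation samples, ~20% error each.
rng = np.random.default_rng(1)
errors = rng.random((6, 100)) < 0.2
print(select_error_independent(errors, k=3))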

Journal ArticleDOI
TL;DR: McPAD (Multiple Classifier Payload-based Anomaly Detector) is a new, accurate payload-based anomaly detection system consisting of an ensemble of one-class classifiers; it is very accurate in detecting network attacks that carry some form of shellcode in the malicious payload.

296 citations
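The gist of an ensemble of one-class classifiers over payload byte 2-grams can be sketched with scikit-learn as below: several one-class SVMs are trained on different low-dimensional views of the same n-gram features and their scores averaged at detection time. The feature hashing, random projections, and SVM parameters are assumptions for the example, not McPAD's actual configuration.

# Ensemble of one-class classifiers over byte 2-gram payload features (illustrative sketch).
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import OneClassSVM

def ngram_features(payload: bytes, dim=256):
    """Hash byte 2-grams of a payload into a fixed-size frequency vector."""
    v = np.zeros(dim)
    for i in range(len(payload) - 1):
        v[(payload[i] * 31 + payload[i + 1]) % dim] += 1
    return v / max(len(payload) - 1, 1)

class OneClassEnsemble:
    def __init__(self, n_members=3, proj_dim=32, seed=0):
        self.projs = [GaussianRandomProjection(proj_dim, random_state=seed + i)
                      for i in range(n_members)]
        self.members = [OneClassSVM(nu=0.05, gamma="scale") for _ in range(n_members)]

    def fit(self, X_normal):
        # Each member sees a different random projection of the normal traffic.
        for proj, svm in zip(self.projs, self.members):
            svm.fit(proj.fit_transform(X_normal))
        return self

    def anomaly_score(self, X):
        # Average the members' signed distances; lower means more anomalous.
        return np.mean([svm.decision_function(proj.transform(X))
                        for proj, svm in zip(self.projs, self.members)], axis=0)

# Toy usage: "normal" payloads are ASCII requests, the second test payload is binary filler.
normal = np.array([ngram_features(("GET /index%d HTTP/1.1" % i).encode()) for i in range(200)])
test = np.array([ngram_features(b"GET /index1 HTTP/1.1"),
                 ngram_features(bytes(range(256)) * 4)])
print(OneClassEnsemble().fit(normal).anomaly_score(test))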

Journal ArticleDOI
TL;DR: The proposed dynamic classifier selection (DCS) method is based on the concepts of "classifier's local accuracy" (CLA) and multiple classifier behaviour (MCB), and exploits MCB for DCS purposes, whereas the behaviour-knowledge space (BKS) method is aimed at classifier combination.

271 citations
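A compact sketch of dynamic classifier selection by local accuracy: for each test point, estimate every classifier's accuracy on the k nearest validation neighbours and let the locally best classifier decide. The base classifiers, neighbourhood size, and dataset are illustrative assumptions, not the paper's experimental setup.

# Dynamic classifier selection via local accuracy on a validation set (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

pool = [DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr),
        LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
        GaussianNB().fit(X_tr, y_tr)]
val_preds = np.array([clf.predict(X_val) for clf in pool])   # (n_classifiers, n_val)
nn = NearestNeighbors(n_neighbors=10).fit(X_val)

def dcs_predict(x):
    """Pick the classifier with the highest accuracy in x's validation neighbourhood."""
    _, idx = nn.kneighbors(x.reshape(1, -1))
    local_acc = (val_preds[:, idx[0]] == y_val[idx[0]]).mean(axis=1)
    return pool[int(np.argmax(local_acc))].predict(x.reshape(1, -1))[0]

acc = np.mean([dcs_predict(x) == t for x, t in zip(X_te, y_te)])
print("DCS test accuracy: %.3f" % acc)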


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This book presents probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate and sampling-based inference, and methods for combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Book
31 Jul 1997
TL;DR: This book explores the meta-heuristics approach called tabu search, which is dramatically changing the authors' ability to solve a host of problems that stretch over the realms of resource planning, telecommunications, VLSI design, financial analysis, scheduling, space planning, energy distribution, molecular engineering, logistics, pattern classification, flexible manufacturing, waste management, mineral exploration, biomedical analysis, environmental conservation and scores of other problems.
Abstract: From the Publisher: This book explores the meta-heuristics approach called tabu search, which is dramatically changing our ability to solve a host of problems that stretch over the realms of resource planning, telecommunications, VLSI design, financial analysis, scheduling, space planning, energy distribution, molecular engineering, logistics, pattern classification, flexible manufacturing, waste management, mineral exploration, biomedical analysis, environmental conservation and scores of other problems. The major ideas of tabu search are presented with examples that show their relevance to multiple applications. Numerous illustrations and diagrams are used to clarify principles that deserve emphasis, and that have not always been well understood or applied. The book's goal is to provide "hands-on" knowledge and insight alike, rather than to focus exclusively either on computational recipes or on abstract themes. This book is designed to be useful and accessible to researchers and practitioners in management science, industrial engineering, economics, and computer science. It can appropriately be used as a textbook in a masters course or in a doctoral seminar. Because of its emphasis on presenting ideas through illustrations and diagrams, and on identifying associated practical applications, it can also be used as a supplementary text in upper division undergraduate courses. Finally, there are many more applications of tabu search than can possibly be covered in a single book, and new ones are emerging every day. The book's goal is to provide a grounding in the essential ideas of tabu search that will allow readers to create successful applications of their own. Along with the essential ideas, understanding of advanced issues is provided, enabling researchers to go beyond today's developments and create the methods of tomorrow.

6,373 citations
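As a toy illustration of the tabu search idea (greedy local moves plus a short-term memory that forbids reversing recent moves), the sketch below minimises a random quadratic objective over bit strings with single-bit-flip moves; the objective, tabu tenure, and move set are assumptions for the example, not material from the book.

# Minimal tabu search over bit strings with single-bit-flip moves (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 30
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                        # symmetric quadratic objective

def cost(x):
    return x @ Q @ x

def tabu_search(iters=500, tenure=7):
    x = rng.integers(0, 2, size=n).astype(float)
    best_x, best_c = x.copy(), cost(x)
    tabu_until = np.zeros(n, dtype=int)  # iteration until which each bit-flip is tabu
    for t in range(iters):
        candidates = []
        for i in range(n):
            x[i] = 1 - x[i]
            c = cost(x)
            x[i] = 1 - x[i]
            # Aspiration: a tabu move is allowed if it beats the best solution so far.
            if tabu_until[i] <= t or c < best_c:
                candidates.append((c, i))
        c, i = min(candidates)           # best admissible move, even if it worsens the cost
        x[i] = 1 - x[i]
        tabu_until[i] = t + tenure       # forbid flipping this bit back for a while
        if c < best_c:
            best_x, best_c = x.copy(), c
    return best_x, best_c

_, best = tabu_search()
print("best objective found:", round(best, 3))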

Posted Content
TL;DR: This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
Abstract: Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples---inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. Code and pre-trained models are available at this https URL and this https URL.

5,789 citations
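A minimal sketch of the robust-optimisation view: an inner projected-gradient ascent (PGD-style) step crafts a worst-case perturbation inside an L-infinity ball, and the outer loop trains the model on those perturbed points. The logistic-regression model, toy data, epsilon, and step sizes are assumptions, not the paper's deep-network setup.

# Adversarial training of a logistic-regression model with an inner PGD attack (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

def loss_grad_w(w, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

def pgd_attack(w, X, y, eps=0.3, step=0.1, iters=10):
    """Maximise the loss w.r.t. the inputs within an L-infinity ball of radius eps."""
    X_adv = X.copy()
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X_adv @ w)))
        grad_x = np.outer(p - y, w)                 # d loss / d x for each sample
        X_adv += step * np.sign(grad_x)             # signed ascent step
        X_adv = np.clip(X_adv, X - eps, X + eps)    # project back into the eps-ball
    return X_adv

w = np.zeros(10)
for epoch in range(200):
    X_adv = pgd_attack(w, X, y)                     # inner maximisation
    w -= 0.5 * loss_grad_w(w, X_adv, y)             # outer minimisation on worst-case points

clean_acc = np.mean(((X @ w) > 0) == y)
adv_acc = np.mean(((pgd_attack(w, X, y) @ w) > 0) == y)
print("clean acc %.2f, adversarial acc %.2f" % (clean_acc, adv_acc))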

Book ChapterDOI
08 Jul 2016
TL;DR: It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that even in physical world scenarios, machine learning systems are vulnerable to adversarial examples.
Abstract: Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.

3,776 citations
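A small sketch of the measurement idea only: a JPEG round-trip plus sensor noise stands in for the print-and-photograph step, and `classify` is a hypothetical placeholder for a real image classifier. None of this is the paper's actual pipeline; it is just one way to estimate how many adversarial images stop fooling a model after a physical-world transformation.

# Estimate how many adversarial images survive a simulated camera step (illustrative sketch).
import numpy as np
from io import BytesIO
from PIL import Image

def simulate_camera(img_array, jpeg_quality=60, noise_std=4.0):
    """Roughly mimic a print-and-photograph cycle via JPEG re-encoding plus sensor noise."""
    img = Image.fromarray(img_array.astype(np.uint8))
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)
    degraded = np.asarray(Image.open(BytesIO(buf.getvalue())), dtype=np.float32)
    degraded += np.random.normal(0, noise_std, degraded.shape)
    return np.clip(degraded, 0, 255).astype(np.uint8)

def survival_stats(adv_images, true_labels, classify):
    """classify: hypothetical placeholder, img -> predicted label.
    Returns the fraction of initially-fooling adversarial images that are
    'fixed' (classified correctly again) after the simulated camera step."""
    fooled_direct = fixed_by_camera = 0
    for img, label in zip(adv_images, true_labels):
        if classify(img) != label:                      # fools the model when fed digitally
            fooled_direct += 1
            if classify(simulate_camera(img)) == label: # no longer fools it after the camera
                fixed_by_camera += 1
    return fixed_by_camera / max(fooled_direct, 1)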

Proceedings ArticleDOI
21 Mar 2016
TL;DR: This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Abstract: Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.

3,114 citations
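A rough sketch of crafting a targeted adversarial sample from the model's input-output Jacobian, here for a small softmax-regression model where the Jacobian is available in closed form. The greedy single-feature rule below is a simplification of the paper's saliency-map construction, and the data, model, and perturbation budget are illustrative assumptions.

# Jacobian-guided targeted perturbation of a softmax-regression model (simplified, illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_cls = 20, 3

# Toy "trained" model: weights and bias of a linear softmax classifier.
W = rng.normal(size=(n_cls, n_feat))
b = rng.normal(size=n_cls)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def jacobian(x):
    """d p_c / d x_j for a softmax-regression model, in closed form: (diag(p) - p p^T) W."""
    p = softmax(W @ x + b)
    return (np.diag(p) - np.outer(p, p)) @ W        # shape (n_cls, n_feat)

def craft(x, target, max_changes=10, theta=0.5):
    """Greedily bump the features that most increase the target-class probability
    while decreasing the other classes (a simplified saliency rule)."""
    x_adv = x.copy()
    changed = set()
    for _ in range(max_changes):
        J = jacobian(x_adv)
        saliency = J[target] - J[np.arange(n_cls) != target].sum(axis=0)
        saliency[list(changed)] = -np.inf           # perturb each feature at most once
        j = int(np.argmax(saliency))
        x_adv[j] += theta * np.sign(saliency[j])
        changed.add(j)
        if np.argmax(softmax(W @ x_adv + b)) == target:
            break
    return x_adv

x = rng.normal(size=n_feat)
print("label before:", np.argmax(softmax(W @ x + b)),
      "after:", np.argmax(softmax(W @ craft(x, target=2) + b)))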