
Somesh Jha

Researcher at University of Wisconsin-Madison

Publications: 363
Citations: 36,185

Somesh Jha is an academic researcher at the University of Wisconsin-Madison. The author has contributed to research on topics including computer science and model checking. The author has an h-index of 76 and has co-authored 328 publications receiving 29,859 citations. Previous affiliations of Somesh Jha include the University of Wisconsin–Milwaukee and the University of Stuttgart.

Papers
Proceedings Article

The Limitations of Deep Learning in Adversarial Settings

TL;DR: This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
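As a rough illustration of the saliency-map idea in this paper (the Jacobian-based Saliency Map Attack), here is a minimal NumPy sketch. The `jacobian(x)` callable is a hypothetical helper standing in for the model's forward derivative, and the greedy single-feature update omits the paper's feature-pair search.

```python
import numpy as np

def saliency_map_attack(x, target, jacobian, n_steps=20, theta=1.0):
    """Greedily perturb the features a saliency map ranks highest.

    x        : 1-D NumPy array of input features in [0, 1]
    target   : index of the class the adversary wants the model to output
    jacobian : hypothetical callable returning dF/dx with shape
               (n_classes, n_features) for the current input
    theta    : amount added to a chosen feature at each step
    """
    x_adv = x.copy()
    for _ in range(n_steps):
        J = jacobian(x_adv)                        # forward derivative of the model
        target_grad = J[target]                    # pushes the target class up
        other_grad = J.sum(axis=0) - target_grad   # pushes the other classes up
        # Saliency: reward features that increase the target score
        # while decreasing the combined score of all other classes.
        saliency = np.where((target_grad > 0) & (other_grad < 0),
                            target_grad * -other_grad, 0.0)
        i = int(np.argmax(saliency))
        if saliency[i] <= 0:
            break                                  # no useful feature left to perturb
        x_adv[i] = np.clip(x_adv[i] + theta, 0.0, 1.0)
    return x_adv
```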
Proceedings Article

Practical Black-Box Attacks against Machine Learning

TL;DR: This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN without knowledge of its architecture, parameters, or training data, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
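A hedged sketch of the substitute-model strategy this TL;DR refers to: query the remote model for labels, train a local substitute, augment the query set using the substitute's gradients, and craft adversarial examples on the substitute in the hope that they transfer. `remote_predict`, `train_model`, `craft_adversarial`, and the substitute's `input_gradients` method are all hypothetical stand-ins for whatever stack is actually used.

```python
import numpy as np

def substitute_attack(seed_inputs, remote_predict, train_model,
                      craft_adversarial, n_rounds=3, lam=0.1):
    """Black-box attack via a locally trained substitute model (sketch).

    remote_predict    : callable mapping inputs to labels from the victim
                        model; only its outputs are ever observed
    train_model       : callable training a local substitute on (X, y)
    craft_adversarial : any white-box attack run against the substitute
    lam               : step size of the Jacobian-based data augmentation
    """
    X = np.asarray(seed_inputs, dtype=float)
    substitute = None
    for _ in range(n_rounds):
        y = remote_predict(X)                  # label the data by querying the oracle
        substitute = train_model(X, y)         # fit the local substitute
        # Jacobian-based dataset augmentation: step inputs toward the
        # substitute's decision boundary to probe the oracle more widely.
        grads = substitute.input_gradients(X, y)   # hypothetical method
        X = np.concatenate([X, X + lam * np.sign(grads)], axis=0)
    # Adversarial examples crafted against the substitute often transfer
    # to the remote model, which is what makes the attack black-box.
    return craft_adversarial(substitute, np.asarray(seed_inputs, dtype=float))
```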
Proceedings Article

Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures

TL;DR: A new class of model inversion attack is developed that exploits confidence values revealed along with predictions and is able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and recover recognizable images of people's faces given only their name.
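A minimal sketch of the confidence-exploiting inversion described above: hill-climb on the confidence score the model reveals for a target class until the input comes to resemble a member of that class (for example, a face). Both callables are hypothetical placeholders; the paper's attack is likewise gradient descent on a cost derived from the returned confidence.

```python
import numpy as np

def invert_class(confidence, confidence_grad, label, shape,
                 n_iters=500, lr=0.1):
    """Reconstruct an input for `label` by maximizing revealed confidence.

    confidence      : callable (x, label) -> scalar confidence the model
                      returns alongside its prediction
    confidence_grad : callable (x, label) -> gradient of that confidence
                      with respect to the input x (hypothetical helper)
    """
    x = np.full(shape, 0.5)                    # start from a neutral gray input
    best, best_score = x, confidence(x, label)
    for _ in range(n_iters):
        x = np.clip(x + lr * confidence_grad(x, label), 0.0, 1.0)
        score = confidence(x, label)
        if score > best_score:                 # keep the most confident input seen
            best, best_score = x, score
    return best
```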
Proceedings Article

Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

TL;DR: In this article, the authors introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs; it increases the average minimum number of input features that must be modified to create adversarial examples by about 800%.
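A short sketch of the two-stage distillation procedure this TL;DR summarizes, with a hypothetical `train_network(X, targets, T)` routine standing in for ordinary DNN training at softmax temperature T:

```python
import numpy as np

def softmax_T(logits, T):
    """Softmax at temperature T; larger T yields softer probability vectors."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def defensive_distillation(X, y_hard, train_network, T=20.0):
    """Train a distilled network on a teacher's softened labels (sketch).

    train_network : hypothetical callable (X, targets, T) -> model exposing
                    .logits(X); any standard DNN training loop would do
    """
    # Stage 1: train a teacher network at temperature T on the hard labels.
    teacher = train_network(X, y_hard, T)
    # Stage 2: train the distilled network on the teacher's soft labels,
    # again at temperature T. At test time the distilled model runs at
    # T = 1, which smooths its gradients and makes adversarial samples
    # harder to craft.
    soft_labels = softmax_T(teacher.logits(X), T)
    return train_network(X, soft_labels, T)
```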
Posted Content

The Limitations of Deep Learning in Adversarial Settings

TL;DR: In this paper, the authors formalize the space of adversaries against deep neural networks and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.