scispace - formally typeset

Saeed Mahloujifar

Researcher at Princeton University

Publications: 58
Citations: 610

Saeed Mahloujifar is an academic researcher at Princeton University. The author has contributed to research in the topics of computer science and robustness (computer science). The author has an h-index of 11 and has co-authored 41 publications receiving 416 citations. Previous affiliations of Saeed Mahloujifar include the University of Virginia.

Papers
Journal ArticleDOI

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

TL;DR: In this article, the authors investigate the adversarial risk and robustness of classifiers and draw a connection to the well-known phenomenon of "concentration of measure" in metric measure spaces.
Posted Content

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

TL;DR: In this paper, the authors investigated the adversarial risk and robustness of classifiers and drew a connection to the well-known phenomenon of concentration of measure in metric measure spaces. They showed that if the metric probability space of the test instances is concentrated, then any classifier with some initial constant error is inherently vulnerable to adversarial perturbations.
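The concentration-of-measure argument above can be sketched in standard notation. This is an illustrative reconstruction; the symbols $E_b$ and $\alpha(b)$ and the exact form of the bound are ours, not necessarily the paper's:

```latex
% Let $(\mathcal{X}, d, \mu)$ be a metric probability space and let
% $E \subseteq \mathcal{X}$ be the error region of a classifier,
% with initial error $\mu(E) \ge \varepsilon$ for a constant $\varepsilon > 0$.
% Writing $E_b = \{\, x \in \mathcal{X} : d(x, E) \le b \,\}$ for the
% $b$-expansion of $E$, concentration of measure gives
\[
  \mu(E_b) \;\ge\; 1 - \alpha(b),
  \qquad \text{where } \alpha(b) \to 0 \text{ rapidly as } b \text{ grows,}
\]
% so the adversarial risk under perturbations of size $b$ is at least
% $\mu(E_b)$, i.e., close to $1$ even for modest $b$.
```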
Proceedings Article

Can Adversarially Robust Learning Leverage Computational Hardness?

TL;DR: In this paper, it was shown that for any learning algorithm with sample complexity $m$ and any efficiently computable "predicate" defining some "bad" property $B$ of the produced hypothesis (e.g., failing on a particular test) that happens with an initial constant probability, there exist polynomial-time online poisoning attacks that tamper with $O(\sqrt{m})$ of the examples, replace them with other correctly labeled examples, and increase the probability of the bad event $B$ to approximately $1$.
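The $\sqrt{m}$ phenomenon can be illustrated with a toy simulation. This is our own sketch, not the paper's construction: the "learner" outputs the empirical sum of $m$ fair $\pm 1$ draws, the "bad" predicate $B$ is a positive sum, and an online attacker who replaces $O(\sqrt{m})$ unfavorable draws with the equally valid value $+1$ drives $\Pr[B]$ from about $1/2$ to nearly $1$.

```python
import math
import random

def run(m, budget, seed):
    """Stream m fair +/-1 examples; the attacker may replace up to
    `budget` of them with the value +1 (a valid example in this toy)."""
    rng = random.Random(seed)
    total = 0
    left = budget
    for _ in range(m):
        x = rng.choice([-1, 1])
        if x == -1 and left > 0:  # spend budget on unfavorable draws
            x = 1
            left -= 1
        total += x
    return total > 0  # the "bad" predicate B: positive empirical sum

def bad_event_rate(m, budget, trials=2000):
    """Estimate Pr[B] over independent streams."""
    return sum(run(m, budget, s) for s in range(trials)) / trials

m = 400
clean = bad_event_rate(m, 0)                          # roughly 1/2
poisoned = bad_event_rate(m, int(3 * math.sqrt(m)))   # close to 1
```

Since the clean sum has standard deviation $\sqrt{m}$, shifting it by a multiple of $\sqrt{m}$ tampered examples is exactly enough to push the bad event from constant probability to near certainty.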
Posted Content

An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?

TL;DR: A reconstruction attack on InstaHide is presented that is able to use the encoded images to recover visually recognizable versions of the original images and proves barriers against achieving privacy through any learning protocol that uses instance encoding.
Book ChapterDOI

Blockwise p-Tampering Attacks on Cryptographic Primitives, Extractors, and Learners

TL;DR: The work of Austrin et al. showed how to break certain "privacy primitives" (e.g., encryption, commitments, etc.) through bitwise p-tampering, by giving a bitwise p-tampering biasing attack that increases the average of any efficient function \(f\) by \(\varOmega(p \cdot \text{Var}[f(U_n)])\).