Amir Rahmati

Researcher at Stony Brook University

Publications: 41
Citations: 3726

Amir Rahmati is an academic researcher at Stony Brook University. He has contributed to research on topics including Computer science and Overhead (computing), has an h-index of 19, and has co-authored 37 publications receiving 2643 citations. His previous affiliations include the University of Michigan and the University of Massachusetts Amherst.

Papers
Proceedings Article (DOI)

Robust Physical-World Attacks on Deep Learning Visual Classification

TL;DR: This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including different viewpoints.
Posted Content

Robust Physical-World Attacks on Machine Learning Models

TL;DR: This paper proposes a new attack algorithm, Robust Physical Perturbations (RP2), that generates perturbations by taking images under different conditions into account and can create spatially constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer.
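
The optimization at the heart of RP2 can be conveyed with a short sketch. This is a minimal reconstruction under stated assumptions, not the authors' released code: it assumes a pretrained classifier `model` that returns logits, a batch `images` of photos of the same sign taken under varying physical conditions, and a binary `mask` confining the perturbation to sticker-like regions; `target_class` and the regularization weight `lam` are illustrative.

```python
import torch
import torch.nn.functional as F

# Sketch of an RP2-style robust perturbation (illustrative, not the authors' code).
# `model`: pretrained road-sign classifier returning logits.
# `images`: (N, C, H, W) photos of one sign under different physical conditions.
# `mask`: binary tensor confining the perturbation to sticker-like regions.
def rp2_attack(model, images, mask, target_class, steps=500, lr=0.01, lam=0.05):
    delta = torch.zeros_like(images[0], requires_grad=True)  # one shared perturbation
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.full((images.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        optimizer.zero_grad()
        # Averaging the targeted loss over the conditioned images approximates
        # an expectation over physical conditions (distance, angle, lighting).
        perturbed = torch.clamp(images + mask * delta, 0.0, 1.0)
        loss = F.cross_entropy(model(perturbed), target)
        loss = loss + lam * torch.norm(mask * delta)  # keep the sticker inconspicuous
        loss.backward()
        optimizer.step()
    return (mask * delta).detach()
```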
Proceedings Article (DOI)

ContexIoT: Towards Providing Contextual Integrity to Appified IoT Platforms

TL;DR: ContexIoT is proposed, a context-based permission system for appified IoT platforms that provides contextual integrity by supporting fine-grained context identification for sensitive actions, and runtime prompts with rich context information to help users perform effective access control.
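
A toy sketch conveys the per-context prompting idea described above. It is hypothetical, not the ContexIoT implementation (which instruments apps at the platform level): the `Context` fields and `prompt_user` helper are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    trigger: str    # event that led to the action, e.g. "smoke_detected"
    action: str     # sensitive action, e.g. "unlock_front_door"
    data_flow: str  # origin of the data feeding the action

_decisions: dict[Context, bool] = {}  # cached user choices, one per context

def prompt_user(ctx: Context) -> bool:
    answer = input(f"Allow '{ctx.action}' triggered by '{ctx.trigger}' "
                   f"(data from {ctx.data_flow})? [y/N] ")
    return answer.strip().lower() == "y"

def check_permission(ctx: Context) -> bool:
    # The same sensitive action is re-prompted whenever it occurs in a new
    # context, rather than being granted once and for all at install time.
    if ctx not in _decisions:
        _decisions[ctx] = prompt_user(ctx)
    return _decisions[ctx]
```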
Proceedings Article

FlowFence: practical data protection for emerging IoT application frameworks

TL;DR: FlowFence is presented, a system that requires consumers of sensitive data to declare their intended data flow patterns, which it enforces with low overhead, while blocking all other undeclared flows.
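
The declared-flow idea can be sketched as a default-deny check at release time. This is a loose analogy, not FlowFence's actual design (which runs consumer code in sandboxed modules and exchanges opaque handles); the source and sink names below are hypothetical.

```python
# Flows the data consumer declared up front; anything else is denied.
DECLARED_FLOWS = {("camera", "local_storage")}

class TaintedValue:
    """Wraps sensitive data together with the label of its source."""
    def __init__(self, source: str, value):
        self.source = source
        self._value = value

    def release_to(self, sink: str):
        # Default-deny: only declared (source, sink) pairs may read the raw value.
        if (self.source, sink) not in DECLARED_FLOWS:
            raise PermissionError(f"undeclared flow: {self.source} -> {sink}")
        return self._value

frame = TaintedValue("camera", b"...jpeg bytes...")
frame.release_to("local_storage")   # allowed: declared above
# frame.release_to("internet")      # would raise PermissionError
```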
Posted Content

Physical Adversarial Examples for Object Detectors

TL;DR: In this article, the authors extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene, and demonstrate the transferability of their adversarial perturbations.
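
For detectors, the attack objective shifts from forcing a target label to suppressing detections across physical conditions. The sketch below is a hedged approximation of such a disappearance objective, not the authors' code: it assumes `detector(image)` returns the candidate-box confidence scores for the target object, and `transforms` is a set of differentiable image transformations standing in for physical variation.

```python
import torch

# Sketch of a disappearance-style attack on an object detector (illustrative).
# `detector`: returns a tensor of confidence scores for the target object.
# `transforms`: differentiable transforms approximating viewpoint/lighting change.
def detector_attack(detector, image, mask, transforms, steps=300, lr=0.01):
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = 0.0
        for t in transforms:  # expectation over synthetic physical conditions
            perturbed = t(torch.clamp(image + mask * delta, 0.0, 1.0))
            loss = loss + detector(perturbed).max()  # suppress strongest detection
        (loss / len(transforms)).backward()
        optimizer.step()
    return (mask * delta).detach()
```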