
Alhussein Fawzi

Researcher at École Polytechnique Fédérale de Lausanne

Publications: 50
Citations: 10926

Alhussein Fawzi is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research on topics including Robustness (computer science) and Decision boundary, has an h-index of 24, and has co-authored 49 publications receiving 8007 citations. Previous affiliations of Alhussein Fawzi include University of California, Los Angeles and École Normale Supérieure.

Papers
Proceedings ArticleDOI

DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks

TL;DR: The authors propose the DeepFool algorithm to efficiently compute minimal perturbations that fool deep networks and thus to reliably quantify the robustness of these classifiers; fine-tuning on such perturbations is shown to make the networks more robust.
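
For illustration, below is a minimal sketch of the kind of iteration DeepFool performs: at each step the decision boundaries are linearized around the current point and the smallest step to the nearest linearized boundary is taken. It assumes a PyTorch classifier model that maps an input tensor (with a batch dimension added) to class logits; num_classes, max_iter, and overshoot are illustrative parameters, not the paper's reference implementation.

```python
import torch

def deepfool(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    """Minimal DeepFool-style sketch: repeatedly linearize the decision
    boundaries around the current point and step to the nearest one."""
    model.eval()
    x = x.clone().detach()
    orig_label = model(x.unsqueeze(0)).argmax().item()

    r_total = torch.zeros_like(x)       # accumulated perturbation
    pert_x = x.clone()

    for _ in range(max_iter):
        pert_x = pert_x.detach().requires_grad_(True)
        logits = model(pert_x.unsqueeze(0))[0]
        if logits.argmax().item() != orig_label:
            break                       # already across the boundary

        grad_orig = torch.autograd.grad(logits[orig_label], pert_x,
                                        retain_graph=True)[0]
        best_step, best_dist = None, float("inf")
        for k in range(num_classes):
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[k], pert_x,
                                         retain_graph=True)[0]
            w_k = grad_k - grad_orig                       # linearized boundary normal
            f_k = (logits[k] - logits[orig_label]).item()  # logit gap to class k
            dist = abs(f_k) / (w_k.norm().item() + 1e-8)   # distance to boundary k
            if dist < best_dist:
                best_dist = dist
                best_step = (dist / (w_k.norm().item() + 1e-8)) * w_k

        r_total = r_total + best_step
        pert_x = x + (1 + overshoot) * r_total             # slight overshoot to cross

    return r_total, pert_x.detach()
```

A typical call would be `perturbation, adversarial_x = deepfool(model, image)`, where `image` is a single tensor shaped as the model expects without the batch dimension.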
Proceedings ArticleDOI

Universal Adversarial Perturbations

TL;DR: The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundaries of classifiers, and it points to a potential security breach: single directions in input space that adversaries can exploit to break a classifier on most natural images.
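
A hedged sketch of how such an image-agnostic perturbation could be accumulated, in the spirit of the summary above: per-image minimal perturbations (here computed with the deepfool sketch from the previous entry) are folded into a running vector v, which is projected back onto an l2 ball of radius xi. The dataset is assumed to be a list of (image, label) tensor pairs; all names and defaults are illustrative, not the paper's reference implementation.

```python
import torch

def universal_perturbation(model, dataset, xi=10.0, max_epochs=5,
                           target_fooling_rate=0.8):
    """Sketch: accumulate an image-agnostic perturbation v, keep it inside
    an l2 ball of radius xi, and stop once enough images are fooled."""
    v = None
    for _ in range(max_epochs):
        fooled = 0
        for x, _ in dataset:
            if v is None:
                v = torch.zeros_like(x)
            clean_pred = model(x.unsqueeze(0)).argmax().item()
            pert_pred = model((x + v).unsqueeze(0)).argmax().item()
            if pert_pred != clean_pred:
                fooled += 1              # v already fools this image
                continue
            # Find a minimal extra perturbation for (x + v) and fold it into v
            # (reuses the deepfool sketch above; an assumption, not the paper's code).
            delta, _ = deepfool(model, x + v)
            v = v + delta
            # Project v back onto the l2 ball of radius xi.
            if v.norm() > xi:
                v = v * (xi / v.norm())
        if fooled / max(len(dataset), 1) >= target_fooling_rate:
            break
    return v
```

The projection step is what keeps the accumulated perturbation small while the per-image updates drive up the fooling rate across the dataset.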
Posted Content

Universal adversarial perturbations

TL;DR: In this paper, the authors show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability.
Proceedings Article

Robustness of classifiers: from adversarial to random noise

TL;DR: In this article, the authors study the robustness of nonlinear classifiers to random and semi-random perturbations of the data and relate it to the curvature of the classifier's decision boundary.
Journal ArticleDOI

Analysis of classifiers’ robustness to adversarial perturbations

TL;DR: In this article, the authors provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the adversarial robustness.