Author

Hamza Fawzi

Bio: Hamza Fawzi is an academic researcher from the University of Cambridge. The author has contributed to research on topics including semidefinite programming and positive-definite matrices. The author has an h-index of 17 and has co-authored 59 publications receiving 1,997 citations. Previous affiliations of Hamza Fawzi include the University of California, Los Angeles and the Massachusetts Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: A new, simple characterization of the maximum number of attacks that can be detected and corrected is given as a function of the system pair (A,C), and it is shown that it is impossible to accurately reconstruct the state of a system if more than half of the sensors are attacked.
Abstract: The vast majority of today's critical infrastructure is supported by numerous feedback control loops, and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair $(A,C)$ of the system, and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm, inspired by techniques in compressed sensing, to estimate the state of the plant despite attacks. We give a theoretical characterization of the performance of this algorithm and we show in numerical simulations that the method is promising and makes it possible to reconstruct the state accurately despite attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output-feedback controllers can be reduced to the design of resilient state estimators.
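The decoding step lends itself to a compact convex-optimization sketch. The following is a minimal illustration of the compressed-sensing idea only, not the authors' exact algorithm; it assumes the cvxpy package is available as the convex solver and that the attack affects a fixed, small set of sensors over the observation window.

```python
# Minimal sketch (illustrative, not the paper's exact decoder): recover x0 from
# y_t = C A^t x0 + e_t, where the attack e_t is nonzero only on a few sensors,
# by penalizing each sensor's residual history as a group (a convex surrogate
# for sparsity across sensors).
import numpy as np
import cvxpy as cp  # assumption: cvxpy is used as the convex solver

def secure_estimate(A, C, Y):
    """Y has shape (T, p): T measurements of p sensors. Returns an estimate of x0."""
    T, p = Y.shape
    n = A.shape[0]
    x0 = cp.Variable(n)
    obs = [C @ np.linalg.matrix_power(A, t) for t in range(T)]   # C A^t
    R = cp.vstack([Y[t] - obs[t] @ x0 for t in range(T)])        # residuals, (T, p)
    # Sum over sensors of the l2 norm of each sensor's residual history:
    # clean sensors are driven to ~0, attacked sensors absorb the errors.
    cost = sum(cp.norm(R[:, i]) for i in range(p))
    cp.Problem(cp.Minimize(cost)).solve()
    return x0.value
```

The per-sensor group penalty plays the role of the sparsity-promoting term: a residual column that no state trajectory can explain is attributed to an attacked sensor rather than distorting the state estimate.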

1,199 citations

Proceedings Article
23 Feb 2018
TL;DR: This paper derives fundamental upper bounds on the robustness to perturbation of any classification function, and proves the existence of adversarial perturbations that transfer well across different classifiers with small risk.
Abstract: Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has proven empirically to be very intricate to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated with a smooth generative model. We derive fundamental upper bounds on the robustness to perturbations of any classification function, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our analysis of the robustness also provides insights into key properties of generative models, such as their smoothness and the dimensionality of their latent space. We conclude with numerical experiments showing that our bounds provide informative baselines for the maximal achievable robustness on several datasets.
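The setting of these bounds can be probed numerically. The sketch below is a hypothetical experiment, not the paper's bound computation: given a smooth generator g and a classifier f (both assumed to be callables), it searches random latent directions for the smallest input-space perturbation that flips the decision, giving a crude empirical counterpart of the robustness quantity being bounded.

```python
# Hypothetical robustness probe under a smooth generative model (illustration only).
import numpy as np

def latent_robustness(g, f, z, n_dirs=100, max_r=5.0, steps=50, rng=None):
    """g: latent -> input, f: input -> class label, z: 1-D latent point."""
    rng = np.random.default_rng() if rng is None else rng
    x, label = g(z), f(g(z))
    best = np.inf
    for _ in range(n_dirs):
        d = rng.standard_normal(z.shape)
        d /= np.linalg.norm(d)
        for r in np.linspace(max_r / steps, max_r, steps):
            x_pert = g(z + r * d)
            if f(x_pert) != label:                      # decision flipped
                best = min(best, np.linalg.norm(x_pert - x))
                break
    return best  # empirical upper estimate of the robustness at g(z)
```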

176 citations

Proceedings ArticleDOI
01 Sep 2011
TL;DR: This work characterizes the number of attacked sensors that can be tolerated so that the state of the system can still be correctly recovered by a decoding algorithm, and proposes a specific, computationally feasible decoding algorithm that can correct a large number of errors.
Abstract: We consider the problem of state estimation for a linear dynamical system when some of the sensor measurements are corrupted by an adversarial attacker. The errors injected by the attacker in the sensor measurements can be arbitrary and are not assumed to follow a specific model (in particular, they can be of arbitrary magnitude). We first characterize the number of attacked sensors that can be tolerated so that the state of the system can still be correctly recovered by any decoding algorithm. We then propose a specific, computationally feasible decoding algorithm and give a characterization of the number of errors this decoder can correct. For this we use ideas from compressed sensing and error correction over the reals, and we exploit the dynamical nature of the problem. We show using numerical simulations that this decoder performs very well in practice and can correct a large number of errors.
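One concrete way to make the "number of tolerable attacked sensors" tangible is a brute-force sparse-observability check. The sketch below uses that standard sufficient condition (observability after removing any 2q sensors), which is not necessarily the exact characterization given in the paper, and is only practical for small systems.

```python
# Brute-force sparse-observability check (a standard sufficient condition;
# not necessarily the paper's exact characterization). q attacks are deemed
# tolerable if (A, C) stays observable after deleting any 2q sensor rows.
import numpy as np
from itertools import combinations

def observable(A, C):
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

def max_tolerable_attacks(A, C):
    p = C.shape[0]
    q = 0
    while 2 * (q + 1) <= p and all(
        observable(A, np.delete(C, list(S), axis=0))
        for S in combinations(range(p), 2 * (q + 1))
    ):
        q += 1
    return q
```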

147 citations

Journal ArticleDOI
TL;DR: The positive semidefinite rank (psd rank), as discussed by the authors, is the smallest integer k for which a nonnegative matrix M admits k × k positive semidefinite matrices A_i, B_j with M_ij = trace(A_i B_j). The psd rank has many appealing geometric interpretations, including semidefinite representations of polyhedra and information-theoretic applications.
Abstract: Let $M \in \mathbb{R}^{p \times q}$ be a nonnegative matrix. The positive semidefinite rank (psd rank) of M is the smallest integer k for which there exist positive semidefinite matrices $A_i, B_j$ of size $k \times k$ such that $M_{ij} = \mathrm{trace}(A_i B_j)$. The psd rank has many appealing geometric interpretations, including semidefinite representations of polyhedra and information-theoretic applications. In this paper we develop and survey the main mathematical properties of psd rank, including its geometry, relationships with other rank notions, and computational and algorithmic aspects.
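A small numerical illustration of the definition: exhibiting k × k psd factors A_i, B_j with M_ij = trace(A_i B_j) certifies that the psd rank of M is at most k, and any matrix built this way is automatically nonnegative (the trace of a product of two psd matrices is nonnegative). The snippet below simply constructs such a matrix from random psd factors.

```python
# Illustration of the psd rank definition: build M_ij = trace(A_i B_j) from
# random k x k psd factors and check that the resulting matrix is nonnegative.
import numpy as np

def psd_factor_matrix(As, Bs):
    """M with entries M_ij = trace(A_i B_j), given lists of k x k psd matrices."""
    return np.array([[np.trace(A @ B) for B in Bs] for A in As])

rng = np.random.default_rng(0)

def random_psd(k):
    G = rng.standard_normal((k, k))
    return G @ G.T                      # G G^T is positive semidefinite

As = [random_psd(2) for _ in range(3)]  # k = 2 factors certify psd rank <= 2
Bs = [random_psd(2) for _ in range(4)]
M = psd_factor_matrix(As, Bs)           # a 3 x 4 nonnegative matrix
assert (M >= 0).all()
```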

107 citations

Proceedings ArticleDOI
01 Dec 2012
TL;DR: It is shown that it is possible to increase the resilience of the system to attacks by changing the dynamics of the system using state feedback, while retaining (almost) total freedom in placing the new poles of the system.
Abstract: We consider the problem of estimation and control of a linear system when some of the sensors or actuators are attacked by a malicious agent. In our previous work [1] we studied systems with no control inputs and formulated the estimation problem as a dynamic error correction problem with sparse attack vectors. In this paper we extend our study and look at the role of inputs and control. We first show that it is possible to increase the resilience of the system to attacks by changing its dynamics using state feedback, while retaining (almost) total freedom in placing the new closed-loop poles. We then look at the problem of stabilizing a plant using output feedback despite attacks on sensors, and we show that a principle of separation of estimation and control holds. Finally, we look at the effect of attacks on actuators in addition to attacks on sensors: we characterize the resilience of the system with respect to actuator and sensor attacks and we formulate an efficient optimization-based decoder to estimate the state of the system despite attacks on actuators and sensors.
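The state-feedback step can be sketched with a standard pole-placement routine. The toy numbers below are illustrative only; the point is the "(almost) total freedom" in relocating the closed-loop poles of A - BK when (A, B) is controllable, here done with scipy.signal.place_poles.

```python
# Pole placement via state feedback on a toy controllable pair (illustrative values).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
desired = [-2.0, -3.0]                       # chosen closed-loop pole locations
K = place_poles(A, B, desired).gain_matrix   # feedback law u = -K x
print(np.linalg.eigvals(A - B @ K))          # approximately [-2, -3]
```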

67 citations


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits better fit submarine fan systems. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England), revealing a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: In this article, a mathematical framework for cyber-physical systems, attacks, and monitors is proposed, and fundamental monitoring limitations from both system-theoretic and graph-based perspectives are characterized.
Abstract: Cyber-physical systems are ubiquitous in power systems, transportation networks, industrial control processes, and critical infrastructures. These systems need to operate reliably in the face of unforeseen failures and external malicious attacks. In this paper: (i) we propose a mathematical framework for cyber-physical systems, attacks, and monitors; (ii) we characterize fundamental monitoring limitations from system-theoretic and graph-theoretic perspectives; and (iii) we design centralized and distributed attack detection and identification monitors. Finally, we validate our findings through compelling examples.
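A textbook instance of the kind of centralized monitor described here is a residual-based detector built on a Luenberger observer; the sketch below is illustrative and not the paper's specific design (the observer gain L and the threshold are assumed to be supplied by the user).

```python
# Residual-based attack/fault monitor (textbook sketch, illustrative only):
# run a Luenberger observer alongside the plant and flag time steps whose
# output residual exceeds a threshold.
import numpy as np

def detect_anomalies(A, C, L, ys, x0_hat, threshold):
    """ys: iterable of measurements y_t; returns the time indices flagged."""
    x_hat = np.asarray(x0_hat, dtype=float)
    flagged = []
    for t, y in enumerate(ys):
        r = y - C @ x_hat                 # output residual
        if np.linalg.norm(r) > threshold:
            flagged.append(t)
        x_hat = A @ x_hat + L @ r         # observer update (no known-input term)
    return flagged
```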

1,430 citations

Journal ArticleDOI
TL;DR: In this paper, the authors review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods.
Abstract: With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have recently been found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool DNNs in the testing/deployment stage. The vulnerability to adversarial examples has become one of the major risks for applying DNNs in safety-critical environments, and attacks and defenses on adversarial examples therefore draw great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications of adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
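As a concrete instance of the attack methods such a survey covers, the sketch below implements the fast gradient sign method (FGSM) for a generic differentiable PyTorch model; it is an illustration of one standard attack, not a method proposed in this particular paper.

```python
# Fast gradient sign method (FGSM) sketch for a differentiable PyTorch model.
import torch

def fgsm(model, loss_fn, x, y, eps):
    """Return x perturbed by eps in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```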

1,203 citations