
Showing papers by "Oliver Kosut published in 2018"


Journal ArticleDOI
TL;DR: A multiple linear regression model is developed to learn the relationship between the external network and the attack subnetwork from historical data, overcoming the attacker's limited information about the system outside the subnetwork.
Abstract: This paper studies physical consequences of unobservable false data injection (FDI) attacks designed only with information inside a subnetwork of the power system. The goal of this attack is to overload a chosen target line without being detected via measurements. To overcome the limited information, a multiple linear regression model is developed to learn the relationship between the external network and the attack subnetwork from historical data. The worst possible consequences of such FDI attacks are evaluated by solving a bi-level optimization problem wherein the first level models the limited attack resources, while the second level formulates the system response to such attacks via dc optimal power flow (OPF). The attack model with limited information is reflected in the dc OPF formulation that only takes into account the system information for the attack subnetwork. The vulnerability of the system to this attack model is illustrated on the IEEE 24-bus reliability test system and the IEEE 118-bus system.
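The regression step described above can be sketched in a few lines. This is a minimal illustration of fitting a multiple linear regression from subnetwork measurements to external-network quantities, not the paper's actual data pipeline; all variable names and the synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical historical data: rows are snapshots, columns are
# measurements inside the attack subnetwork (observed by the attacker)
# and quantities in the external network (to be predicted).
n_snapshots, n_internal, n_external = 200, 6, 4
X_internal = rng.normal(size=(n_snapshots, n_internal))
true_map = rng.normal(size=(n_internal, n_external))
Y_external = X_internal @ true_map + 0.01 * rng.normal(size=(n_snapshots, n_external))

# Multiple linear regression: learn the internal -> external relationship
# from historical snapshots, as in the attack design described above.
model = LinearRegression().fit(X_internal, Y_external)
Y_hat = model.predict(X_internal)
```

The learned map stands in for the unknown external network when the attacker solves the bi-level optimization over the subnetwork alone.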

62 citations


Proceedings ArticleDOI
17 Jun 2018
TL;DR: In this paper, a tunable measure for information leakage called maximal $\alpha$-leakage is introduced, which quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset conditioning on a disclosed dataset.
Abstract: A tunable measure for information leakage called maximal a-leakage is introduced. This measure quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset conditioning on a disclosed dataset. The choice of $\alpha$ determines the specific adversarial action ranging from refining a belief for $\alpha=1$ to guessing the best posterior for $\alpha=\infty$ , and for these extremal values this measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. For all other $\alpha$ this measure is shown to be the Arimoto channel capacity. Several properties of this measure are proven including: (i) quasi-convexity in the mapping between the original and disclosed datasets; (ii) data processing inequalities; and (iii) a composition property. A full version of this paper is in [1].
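As a concrete anchor for the $\alpha=\infty$ extreme mentioned above, maximal leakage has a known closed form, $\mathrm{MaxL}(X \to Y) = \log \sum_y \max_x P(y|x)$. A minimal numerical sketch follows; the binary symmetric channel used here is an assumed toy example, not from the paper.

```python
import numpy as np

# Channel P(y|x) as a matrix: rows indexed by input x, columns by output y.
# Toy example: binary symmetric channel with crossover probability 0.1.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# Maximal leakage (the alpha = infinity extreme of maximal alpha-leakage):
#   MaxL(X -> Y) = log2( sum_y max_x P(y|x) )   [in bits]
max_leakage = np.log2(P.max(axis=0).sum())
```

For this channel the column maxima are 0.9 and 0.9, so the leakage is $\log_2 1.8 \approx 0.85$ bits, strictly less than the 1 bit leaked by a noiseless channel.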

40 citations


Posted Content
TL;DR: A tunable measure for information leakage called maximal $\alpha$-leakage is introduced that quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset conditioning on a disclosed dataset.
Abstract: A tunable measure for information leakage called \textit{maximal $\alpha$-leakage} is introduced. This measure quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset conditioning on a disclosed dataset. The choice of $\alpha$ determines the specific adversarial action ranging from refining a belief for $\alpha =1$ to guessing the best posterior for $\alpha = \infty$, and for these extremal values this measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. For all other $\alpha$ this measure is shown to be the Arimoto channel capacity. Several properties of this measure are proven including: (i) quasi-convexity in the mapping between the original and disclosed datasets; (ii) data processing inequalities; and (iii) a composition property.

39 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: Keyless authentication is considered in an adversarial point-to-point channel where the adversary is assumed to know the code but not the message, and the authentication capacity is shown to be either zero or equal to the no-adversary capacity, depending on whether the channel satisfies a condition termed overwritability.
Abstract: Keyless authentication is considered in an adversarial point-to-point channel. Namely, a legitimate transmitter and receiver aim to communicate over a noisy channel that may or may not also contain an active adversary, capable of transmitting an arbitrary signal into the channel. If the adversary is not present, then the receiver must successfully decode the message with high probability; if it is present, then the receiver must either decode the message or detect the adversary’s presence. Thus, whenever the receiver decodes, it can be certain that the decoded message is authentic. The exact authentication capacity is characterized for discrete-memoryless adversary channels, where the adversary is assumed to know the code but not the message. The authentication capacity is shown to be either zero or equal to the no-adversary capacity, depending on whether the channel satisfies a condition termed overwritability.

27 citations


Proceedings ArticleDOI
21 Dec 2018
TL;DR: Three detection techniques are presented against a wide class of cyber-attacks that maliciously redistribute loads by modifying measurements, with the nearest neighbor algorithm being the most computationally efficient.
Abstract: Three detection techniques are presented against a wide class of cyber-attacks that maliciously redistribute loads by modifying measurements. The detectors use different anomaly detection algorithms based on machine learning techniques: the nearest neighbor method, support vector machines, and replicator neural networks. The detectors are tested using a data-driven approach on a realistic dataset comprising real historical load data, in the form of publicly available PJM zonal data mapped to the IEEE 30-bus system. The results show all three detectors to be very accurate, with the nearest neighbor algorithm being the most computationally efficient.
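The nearest-neighbor detector from the list above can be sketched as a distance-to-historical-data anomaly score. This is an illustrative toy version on synthetic data, not the paper's PJM-based experiment; the data shapes, threshold choice, and attack magnitude are all assumptions. The SVM and replicator-neural-network detectors would plug into the same score/threshold pattern.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Hypothetical stand-in for historical zonal load data: normal operating
# points cluster together, while a load-redistribution attack shifts them.
normal_loads = rng.normal(loc=100.0, scale=5.0, size=(500, 10))
train, held_out = normal_loads[:400], normal_loads[400:]
attacked_loads = held_out[:20] + rng.normal(loc=0.0, scale=50.0, size=(20, 10))

# Nearest-neighbor anomaly score: distance to the closest historical point.
nn = NearestNeighbors(n_neighbors=1).fit(train)

def anomaly_score(samples):
    dist, _ = nn.kneighbors(samples)
    return dist.ravel()

# Threshold chosen from clean held-out data; attacked samples exceed it.
threshold = np.percentile(anomaly_score(held_out), 99)
flags = anomaly_score(attacked_loads) > threshold
```

The method needs no labeled attack data, which is why it can be trained purely on historical load measurements.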

16 citations


Proceedings ArticleDOI
17 Jun 2018
TL;DR: The achievability proof uses a novel variant of the Csiszár-Narayan method for the arbitrarily-varying channel, while the converse shows that with enough power an adversary can confuse the decoder by transmitting a superposition of several codewords while satisfying its power constraint with positive probability.
Abstract: This paper considers list-decoding for the Gaussian arbitrarily-varying channel under the average probability of error criterion, where both the legitimate transmission and the state (or adversarial signal) are power limited. For list size L, the capacity equals that of a standard Gaussian channel with the noise power increased by the adversary's power, provided the ratio of the adversary power to the transmitter power is less than L; otherwise, the capacity is zero. The converse proof involves showing that with enough power, an adversary can confuse the decoder by transmitting a superposition of several codewords while satisfying its power constraint with positive probability. The achievability proof uses a novel variant of the Csiszár-Narayan method for the arbitrarily-varying channel.
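The capacity statement above can be written compactly. The symbols are notation introduced here, not taken from the paper: $P$ is the transmitter power, $\Lambda$ the adversary power, and $\sigma^2$ the ambient Gaussian noise power.

```latex
C_L =
\begin{cases}
\dfrac{1}{2}\log\!\left(1 + \dfrac{P}{\sigma^2 + \Lambda}\right), & \Lambda/P < L,\\[2mm]
0, & \text{otherwise.}
\end{cases}
```

The threshold $\Lambda/P < L$ reflects the converse argument: with $\Lambda \ge LP$ the adversary can superimpose $L$ (or more) codewords, leaving the decoder with a list that cannot be trusted.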

13 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this paper, the authors study the problem of data disclosure with privacy guarantees, wherein the utility of the disclosed data is ensured via a hard distortion constraint, and show that both the optimal mechanism and the optimal tradeoff are invariant for any $\alpha > 1$.
Abstract: We study the problem of data disclosure with privacy guarantees, wherein the utility of the disclosed data is ensured via a hard distortion constraint. Unlike average distortion, hard distortion provides a deterministic guarantee of fidelity. For the privacy measure, we use a tunable information leakage measure, namely maximal $\alpha$-leakage ($\alpha \in [1, \infty]$), and formulate the privacy-utility tradeoff problem. The resulting solution highlights that under a hard distortion constraint, the nature of the solution remains unchanged for both local and non-local privacy requirements. More precisely, we show that both the optimal mechanism and the optimal tradeoff are invariant for any $\alpha > 1$; i.e., the tunable leakage measure only behaves as either of the two extrema: mutual information for $\alpha=1$ and maximal leakage for $\alpha=\infty$.

13 citations


Posted Content
TL;DR: A tunable measure for information leakage, maximal $\alpha$-leakage, quantifies the maximal gain of an adversary in inferring any (potentially random) function of a dataset from a release of the data; it is used as a privacy measure to study the problem of data publishing with privacy guarantees.
Abstract: We introduce a tunable measure for information leakage called maximal $\alpha$-leakage. This measure quantifies the maximal gain of an adversary in inferring any (potentially random) function of a dataset from a release of the data. The inferential capability of the adversary is, in turn, quantified by a class of adversarial loss functions that we introduce as $\alpha$-loss, $\alpha\in[1,\infty]$. The choice of $\alpha$ determines the specific adversarial action and ranges from refining a belief (about any function of the data) for $\alpha=1$ to guessing the most likely value for $\alpha=\infty$ while refining the $\alpha^{th}$ moment of the belief for $\alpha$ in between. Maximal $\alpha$-leakage then quantifies the adversarial gain under $\alpha$-loss over all possible functions of the data. In particular, for the extremal values of $\alpha=1$ and $\alpha=\infty$, maximal $\alpha$-leakage simplifies to mutual information and maximal leakage, respectively. For $\alpha\in(1,\infty)$ this measure is shown to be the Arimoto channel capacity of order $\alpha$. We show that maximal $\alpha$-leakage satisfies data processing inequalities and a sub-additivity property thereby allowing for a weak composition result. Building upon these properties, we use maximal $\alpha$-leakage as the privacy measure and study the problem of data publishing with privacy guarantees, wherein the utility of the released data is ensured via a hard distortion constraint. Unlike average distortion, hard distortion provides a deterministic guarantee of fidelity. We show that under a hard distortion constraint, for $\alpha>1$ the optimal mechanism is independent of $\alpha$, and therefore, the resulting optimal tradeoff is the same for all values of $\alpha>1$. Finally, the tunability of maximal $\alpha$-leakage as a privacy measure is also illustrated for binary data with average Hamming distortion as the utility measure.
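The $\alpha$-loss family referenced above has a standard explicit form in the authors' related work, restated here for convenience (the exact normalization should be checked against the paper): for a predicted distribution $\hat{P}$ and realized value $x$,

```latex
\ell_\alpha\!\left(x, \hat{P}\right)
= \frac{\alpha}{\alpha - 1}\left[1 - \hat{P}(x)^{\frac{\alpha-1}{\alpha}}\right],
\qquad \alpha \in (1, \infty),
```

with the limiting cases matching the extremes described in the abstract: as $\alpha \to 1$ this recovers the log-loss $-\log \hat{P}(x)$ (refining a belief), and at $\alpha = \infty$ it becomes $1 - \hat{P}(x)$, the probability-of-error loss (guessing the most likely value).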

12 citations


Posted Content
TL;DR: It is shown that both the optimal mechanism and the optimal tradeoff are invariant for any $\alpha > 1$; i.e., the tunable leakage measure only behaves as either of the two extrema: mutual information for $\alpha=1$ and maximal leakage for $\alpha=\infty$.
Abstract: We study the problem of data disclosure with privacy guarantees, wherein the utility of the disclosed data is ensured via a \emph{hard distortion} constraint. Unlike average distortion, hard distortion provides a deterministic guarantee of fidelity. For the privacy measure, we use a tunable information leakage measure, namely \textit{maximal $\alpha$-leakage} ($\alpha\in[1,\infty]$), and formulate the privacy-utility tradeoff problem. The resulting solution highlights that under a hard distortion constraint, the nature of the solution remains unchanged for both local and non-local privacy requirements. More precisely, we show that both the optimal mechanism and the optimal tradeoff are invariant for any $\alpha>1$; i.e., the tunable leakage measure only behaves as either of the two extrema: mutual information for $\alpha=1$ and maximal leakage for $\alpha=\infty$.

11 citations


Proceedings ArticleDOI
24 Dec 2018
TL;DR: In this paper, an enhanced bad data detector (BDD) utilizing the effect of zero injection buses is proposed, and a class of multiplicative FDI attacks that maintain the rank of the PMU measurement matrix is introduced.
Abstract: This paper studies false data injection (FDI) attacks against phasor measurement units (PMUs). As compared to the conventional bad data detector (BDD), an enhanced BDD utilizing the effect of zero injection buses is proposed. Feasible conditions under which FDI attacks are unobservable to this enhanced BDD are discussed. In addition, a class of multiplicative FDI attacks that maintain the rank of the PMU measurement matrix is introduced. Simulation results on the IEEE RTS-24-bus system indicate that these multiplicative unobservable attacks can avoid detection by both the enhanced BDD and a detector based on low-rank decomposition proposed in prior work.
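The rank-preservation property that lets these attacks evade a low-rank-decomposition detector is a basic linear-algebra fact, sketched below. This is an illustration of the property only, not the paper's attack construction; the matrix sizes and the choice of an orthogonal transform are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for a block of PMU measurements: consecutive time
# samples of the system state typically make this matrix low rank.
n_channels, n_samples, true_rank = 8, 50, 3
Z = rng.normal(size=(n_channels, true_rank)) @ rng.normal(size=(true_rank, n_samples))

# A multiplicative modification by any invertible matrix preserves rank,
# the property a low-rank-decomposition detector relies on. An orthogonal
# matrix (from QR) is used here to guarantee invertibility.
T, _ = np.linalg.qr(rng.normal(size=(n_channels, n_channels)))
Z_attacked = T @ Z

rank_before = np.linalg.matrix_rank(Z)
rank_after = np.linalg.matrix_rank(Z_attacked)
```

Because the attacked measurement matrix has the same rank as the clean one, a detector that flags rank increases sees nothing unusual.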

7 citations


Posted Content
TL;DR: Simulation results on the IEEE RTS-24-bus system indicate that the introduced class of multiplicative FDI attacks, which maintain the rank of the PMU measurement matrix, can avoid detection by both the enhanced BDD and a detector based on low-rank decomposition proposed in prior work.
Abstract: This paper studies false data injection (FDI) attacks against phasor measurement units (PMUs). As compared to the conventional bad data detector (BDD), an enhanced BDD utilizing the effect of zero injection buses is proposed. Feasible conditions under which FDI attacks are unobservable to this enhanced BDD are discussed. In addition, a class of multiplicative FDI attacks that maintain the rank of the PMU measurement matrix is introduced. Simulation results on the IEEE RTS-24-bus system indicate that these multiplicative unobservable attacks can avoid detection by both the enhanced BDD and a detector based on low-rank decomposition proposed in prior work.

Proceedings ArticleDOI
15 Aug 2018
TL;DR: In this article, finite blocklength and second-order (dispersion) bounds for the arbitrarily-varying channel (AVC) with shared randomness were derived.
Abstract: Finite blocklength and second-order (dispersion) results are presented for the arbitrarily-varying channel (AVC), a classical model wherein an adversary can transmit arbitrary signals into the channel. A novel finite blocklength achievability bound is presented, roughly analogous to the random coding union bound for non-adversarial channels. This finite blocklength bound, along with a known converse bound, is used to derive bounds on the dispersion of discrete memoryless AVCs without shared randomness, and with cost constraints on the input and the state. These bounds are tight for many channels of interest, including the binary symmetric AVC. However, the bounds are not tight if the deterministic and random code capacities differ.
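The "dispersion" terminology above refers to the standard second-order expansion of the maximal code size $M^*(n, \epsilon)$ at blocklength $n$ and error probability $\epsilon$. The symbols $C$ (capacity) and $V$ (dispersion) below are generic notation for this expansion, not specific values from the paper:

```latex
\log M^*(n, \epsilon) = nC - \sqrt{nV}\, Q^{-1}(\epsilon) + O(\log n),
```

where $Q^{-1}$ is the inverse of the Gaussian tail function $Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt$. The paper's bounds pin down $V$ exactly for channels such as the binary symmetric AVC, but leave a gap when the deterministic and random code capacities differ.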

Journal ArticleDOI
TL;DR: Focusing on the special case of the erasure distortion measure, a code design based on the polytope codes of Kosut et al. is introduced and also applied to a separate problem in distributed storage.
Abstract: We consider a problem in which a source is encoded into $N$ packets, an unknown number of which are subject to adversarial errors en route to the decoder. We seek code designs for which the decoder is guaranteed to be able to reproduce the source subject to a certain distortion constraint when there are no packet errors, subject to a less stringent distortion constraint when there is one error, and so on. Focusing on the special case of the erasure distortion measure, we introduce a code design based on the polytope codes of Kosut et al. The resulting designs are also applied to a separate problem in distributed storage.

Posted Content
TL;DR: A novel finite blocklength achievability bound is presented, roughly analogous to the random coding union bound for non-adversarial channels, used to derive bounds on the dispersion of discrete memoryless AVCs without shared randomness, and with cost constraints on the input and the state.
Abstract: Finite blocklength and second-order (dispersion) results are presented for the arbitrarily-varying channel (AVC), a classical model wherein an adversary can transmit arbitrary signals into the channel. A novel finite blocklength achievability bound is presented, roughly analogous to the random coding union bound for non-adversarial channels. This finite blocklength bound, along with a known converse bound, is used to derive bounds on the dispersion of discrete memoryless AVCs without shared randomness, and with cost constraints on the input and the state. These bounds are tight for many channels of interest, including the binary symmetric AVC. However, the bounds are not tight if the deterministic and random code capacities differ.