
Showing papers by "Oliver Kosut published in 2020"


Proceedings ArticleDOI
16 Jan 2020
TL;DR: The optimal differential privacy parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP) are derived based on the joint range of two f-divergences that underlie the approximate and the Rényi variations of differential privacy.
Abstract: We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP). Our result is based on the joint range of two f-divergences that underlie the approximate and the Rényi variations of differential privacy. We apply our result to the moments accountant framework for characterizing privacy guarantees of stochastic gradient descent. When compared to the state-of-the-art, our bounds may lead to about 100 more stochastic gradient descent iterations for training deep learning models for the same privacy budget.
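For orientation, the classical (non-optimal) RDP-to-DP conversion that this result improves upon can be sketched in a few lines. The function name is ours, and the simple bound below is the well-known one, not the paper's optimal conversion.

```python
import math

def rdp_to_approx_dp(alpha: float, eps_rdp: float, delta: float) -> float:
    """Classical conversion: an (alpha, eps_rdp)-RDP mechanism satisfies
    (eps_rdp + log(1/delta)/(alpha - 1), delta)-approximate DP.
    The paper derives the optimal (tighter) conversion; this standard
    bound is shown only for orientation."""
    assert alpha > 1 and 0 < delta < 1
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1.0)

# Example: alpha = 2, eps_rdp = 1, delta = e^{-1} gives eps ≈ 2.
print(rdp_to_approx_dp(2.0, 1.0, math.exp(-1)))
```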

21 citations


Posted Content
TL;DR: The first part of this work develops a machinery for optimally relating approximate DP to RDP based on the joint range of two $f$ -divergences that underlie the approximate DP and RDP, and establishes a relationship between RDP and hypothesis test DP.
Abstract: We consider three different variants of differential privacy (DP), namely approximate DP, Rényi DP (RDP), and hypothesis test DP. In the first part, we develop a machinery for optimally relating approximate DP to RDP based on the joint range of two $f$-divergences that underlie the approximate DP and RDP. In particular, this enables us to derive the optimal approximate DP parameters of a mechanism that satisfies a given level of RDP. As an application, we apply our result to the moments accountant framework for characterizing privacy guarantees of noisy stochastic gradient descent (SGD). When compared to the state-of-the-art, our bounds may lead to about 100 more stochastic gradient descent iterations for training deep learning models for the same privacy budget. In the second part, we establish a relationship between RDP and hypothesis test DP which allows us to translate the RDP constraint into a tradeoff between type I and type II error probabilities of a certain binary hypothesis test. We then demonstrate that for noisy SGD our result leads to tighter privacy guarantees compared to the recently proposed $f$-DP framework for some range of parameters.
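The hypothesis-test view can be made concrete with the Gaussian tradeoff curve from the $f$-DP framework referenced above: the smallest type II error achievable at a given type I error when testing $N(0,1)$ against $N(\mu,1)$. The helper name below is ours.

```python
from statistics import NormalDist

def gaussian_tradeoff(type1: float, mu: float) -> float:
    """Smallest type II error at type I error `type1` when testing
    N(0,1) vs N(mu,1): f(a) = Phi(Phi^{-1}(1 - a) - mu). This is the
    G_mu curve of the f-DP framework; an RDP constraint induces a
    tradeoff curve of the same general shape."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(1.0 - type1) - mu)

# With mu = 0 the hypotheses coincide, so f(a) = 1 - a (blind guessing);
# larger mu (less privacy) pushes the curve down.
print(gaussian_tradeoff(0.3, 0.0), gaussian_tradeoff(0.3, 1.0))
```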

21 citations


Posted Content
TL;DR: In this article, the optimal differential privacy parameters of a mechanism that satisfies a given level of Rényi differential privacy were derived based on the joint range of two $f$-divergences that underlie the approximate and Rényi variations of differential privacy.
Abstract: We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP). Our result is based on the joint range of two $f$-divergences that underlie the approximate and the Rényi variations of differential privacy. We apply our result to the moments accountant framework for characterizing privacy guarantees of stochastic gradient descent. When compared to the state-of-the-art, our bounds may lead to about 100 more stochastic gradient descent iterations for training deep learning models for the same privacy budget.

12 citations


Posted Content
TL;DR: The gap between the best achievable rates and the asymptotic capacity region as a function of blocklength is shown to hold for any channel satisfying certain regularity conditions, which includes all discrete-memoryless channels and the Gaussian multiple-access channel.
Abstract: A new converse bound is presented for the two-user multiple-access channel under the average probability of error constraint. This bound shows that for most channels of interest, the second-order coding rate---that is, the difference between the best achievable rates and the asymptotic capacity region as a function of blocklength $n$ with fixed probability of error---is $O(1/\sqrt{n})$ bits per channel use. The principal tool behind this converse proof is a new measure of dependence between two random variables called wringing dependence, as it is inspired by Ahlswede's wringing technique. The $O(1/\sqrt{n})$ gap is shown to hold for any channel satisfying certain regularity conditions, which includes all discrete-memoryless channels and the Gaussian multiple-access channel. Exact upper bounds as a function of the probability of error are proved for the coefficient in the $O(1/\sqrt{n})$ term, although for most channels they do not match existing achievable bounds.
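In standard second-order notation (the symbols here are generic, not necessarily the paper's), the converse says that any rate pair achievable at blocklength $n$ with error probability $\epsilon$ lies within $O(1/\sqrt{n})$ of the asymptotic capacity region $\mathcal{C}$:

$$(R_1, R_2) \in \mathcal{R}(n, \epsilon) \;\Longrightarrow\; \min_{(r_1, r_2) \in \mathcal{C}} \max\{R_1 - r_1,\; R_2 - r_2,\; 0\} \le \frac{A(\epsilon)}{\sqrt{n}},$$

where $A(\epsilon)$ is the coefficient for which the paper proves exact upper bounds.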

7 citations


Proceedings ArticleDOI
01 Jun 2020
TL;DR: This paper considers a standard definition of authentication, and defines γ-correcting authentication, where it is required that at least a γ fraction of the users’ messages be correctable, even in the presence of an adversary.
Abstract: In this paper, we present results on the authentication capacity region for the two-user arbitrarily-varying multiple-access channel. We first consider a standard definition of authentication, in which the receiver may discard both messages if an adversary is detected. For this setting, we show that an extension of the arbitrarily-varying channel condition known as overwritability characterizes the authentication capacity region. We then define γ-correcting authentication, where we require that at least a γ fraction of the users’ messages be correctable, even in the presence of an adversary. We give necessary conditions for the γ-correcting authentication capacity region to have nonempty interior, and show that positive rate pairs are achievable over a particular channel that satisfies these conditions.

6 citations


Proceedings ArticleDOI
21 Jun 2020
TL;DR: It is shown that even if the authentication capacity with a deterministic encoder and an essentially omniscient adversary is zero, allowing a stochastic encoder can result in a positive authentication capacity.
Abstract: In unsecured communications settings, ascertaining the trustworthiness of received information, called authentication, is paramount. We consider keyless authentication over an arbitrarily-varying channel, where channel states are chosen by a malicious adversary with access to noisy versions of transmitted sequences. We have shown previously that a channel condition termed U-overwritability is a sufficient condition for zero authentication capacity over such a channel, and also that with a deterministic encoder, a sufficiently clear-eyed adversary is essentially omniscient. In this paper, we show that even if the authentication capacity with a deterministic encoder and an essentially omniscient adversary is zero, allowing a stochastic encoder can result in a positive authentication capacity. Furthermore, the authentication capacity with a stochastic encoder can be equal to the no-adversary capacity of the underlying channel in this case. We illustrate this for a binary channel model, which provides insight into the more general case.

6 citations


Journal ArticleDOI
01 Oct 2020
TL;DR: In this article, a machine learning-based detection framework is proposed to detect a class of cyber-attacks that redistribute loads by modifying measurements, which is called load redistribution attacks.
Abstract: A machine learning-based detection framework is proposed to detect a class of cyber-attacks that redistribute loads by modifying measurements. The detection framework consists of a multi-output support vector regression (SVR) load predictor and a subsequent support vector machine (SVM) attack detector to determine the existence of load redistribution (LR) attacks utilizing loads predicted by the SVR predictor. Historical load data for training the SVR are obtained from the publicly available PJM zonal loads and are mapped to the IEEE 30-bus system. The features to predict loads are carefully extracted from the historical load data capturing both temporal and spatial correlations. The SVM attack detector is trained using normal data and randomly created LR attacks so that it can maximally explore the attack space. An algorithm to create random LR attacks is introduced. The results show that the SVM detector trained merely using random attacks can effectively detect not only random attacks but also intelligently designed attacks. Moreover, using the SVR predicted loads to re-dispatch generation when attacks are detected can significantly mitigate the attack consequences.
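A minimal sketch of the two-stage pipeline, using scikit-learn's multi-output SVR and an SVM on the prediction residuals. Synthetic data stands in for the paper's PJM/IEEE 30-bus loads, and the feature engineering, attack-generation algorithm, and tuning are all omitted.

```python
# Stage 1: a multi-output SVR predicts loads; Stage 2: an SVM classifies
# the residual between observed and predicted loads as normal vs attack.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 8))                       # temporal/spatial features
W = rng.normal(size=(8, 5))
y_loads = X_hist @ W + 0.1 * rng.normal(size=(200, 5))   # synthetic zonal loads

predictor = MultiOutputRegressor(SVR()).fit(X_hist, y_loads)
residual = y_loads - predictor.predict(X_hist)           # normal residuals

# A (purely illustrative) load-redistribution attack shifts observed loads.
attack_res = residual + rng.uniform(3.0, 5.0, size=residual.shape)
X_det = np.vstack([residual, attack_res])
y_det = np.r_[np.zeros(len(residual)), np.ones(len(attack_res))]
detector = SVC().fit(X_det, y_det)                       # attack detector
```

On detection, the SVR-predicted loads (rather than the observed, possibly falsified ones) would be used for re-dispatch, as in the paper's mitigation step.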

5 citations


Proceedings ArticleDOI
01 Jun 2020
TL;DR: Maximal α-leakage is a tunable measure of information leakage, quantified by the quality of an adversary’s belief, informed by public data, about an arbitrary function of private data.
Abstract: Maximal α-leakage is a tunable measure of information leakage based on the quality of an adversary’s belief about an arbitrary function of private data, given public data. The parameter α determines the loss function used to measure the quality of a belief, ranging from log-loss at $\alpha=1$ to the probability of error at $\alpha=\infty$. We review its definition and main properties, including extensions to $\alpha < 1$, robustness to side information, and relationship to Rényi differential privacy.
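The loss family being tuned can be written down explicitly: this is the α-loss from the related literature (the helper name is ours), which interpolates between the two extremes named above.

```python
import math

def alpha_loss(p: float, alpha: float) -> float:
    """alpha-loss of assigning probability p to the true outcome:
    (alpha/(alpha-1)) * (1 - p**(1 - 1/alpha)).
    Recovers log-loss as alpha -> 1 and the probability of error
    1 - p as alpha -> infinity."""
    assert 0 < p <= 1 and alpha >= 1
    if alpha == 1.0:
        return -math.log(p)    # log-loss limit
    if math.isinf(alpha):
        return 1.0 - p         # probability-of-error limit
    return (alpha / (alpha - 1.0)) * (1.0 - p ** (1.0 - 1.0 / alpha))

# The two extremes for p = 1/2: log 2 ≈ 0.693 and 1/2.
print(alpha_loss(0.5, 1.0), alpha_loss(0.5, float("inf")))
```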

5 citations


Posted Content
14 Mar 2020
TL;DR: The results show that the proposed detection framework can effectively detect LR attacks and attack mitigation can be achieved by using the SVR predicted loads to re-dispatch generations.
Abstract: A machine learning-based detection framework is proposed to detect a class of cyber-attacks that redistribute loads by modifying measurements. The detection framework consists of a multi-output support vector regression (SVR) load predictor that predicts loads by exploiting both spatial and temporal correlations, and a subsequent support vector machine (SVM) attack detector to determine the existence of load redistribution (LR) attacks utilizing loads predicted by the SVR predictor. Historical load data for training the SVR are obtained from the publicly available PJM zonal loads and are mapped to the IEEE 30-bus system. The SVM is trained using normal data and randomly created LR attacks, and is tested against both random and intelligently designed LR attacks. The results show that the proposed detection framework can effectively detect LR attacks. Moreover, attack mitigation can be achieved by using the SVR predicted loads to re-dispatch generations.

5 citations


Posted Content
14 Mar 2020
TL;DR: It is shown that attacks designed using DCOPF fail to cause overflows on reliable systems because the system response modeled is inaccurate, and an ADBLP that accurately models the system response is proposed to find the worst-case physical consequences, thereby modeling a strong attacker with system-level knowledge.
Abstract: This paper demonstrates that false data injection (FDI) attacks are extremely limited in their ability to cause physical consequences on $N-1$ reliable power systems operating with real-time contingency analysis (RTCA) and security constrained economic dispatch (SCED). Prior work has shown that FDI attacks can be designed via an attacker-defender bi-level linear program (ADBLP) to cause physical overflows after re-dispatch using DCOPF. In this paper, it is shown that attacks designed using DCOPF fail to cause overflows on $N-1$ reliable systems because the system response modeled is inaccurate. An ADBLP that accurately models the system response is proposed to find the worst-case physical consequences, thereby modeling a strong attacker with system level knowledge. Simulation results on the synthetic Texas system with 2000 buses show that even with the new enhanced attacks, for systems operated conservatively due to $N-1$ constraints, the designed attacks only lead to post-contingency overflows. Moreover, the attacker must control a large portion of measurements and physically create a contingency in the system to cause consequences. Therefore, it is conceivable but requires an extremely sophisticated attacker to cause physical consequences on $N-1$ reliable power systems operated with RTCA and SCED.
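Schematically (in generic notation, not necessarily the paper's), an ADBLP of this kind takes the form

$$\max_{a \in \mathcal{A}} \; f_\ell\big(x^\star(a)\big) \quad \text{s.t.} \quad x^\star(a) \in \arg\min_{x \in \mathcal{X}(a)} c^\top x,$$

where the outer problem chooses the attacker's measurement modifications $a$ within its resource limits $\mathcal{A}$, the inner problem models the operator's re-dispatch in response to the falsified measurements (here RTCA and SCED rather than plain DCOPF), and $f_\ell$ is the physical flow on a target line $\ell$.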

4 citations


Proceedings ArticleDOI
11 Nov 2020
TL;DR: In this paper, a modified Benders' decomposition algorithm is proposed to solve the ADBLP on large power systems without converting it to a single-level mixed-integer linear program (MILP), which is numerically intractable for power systems with a large number of buses and branches.
Abstract: This paper studies the vulnerability of large-scale power systems to false data injection (FDI) attacks through their physical consequences. An attacker-defender bi-level linear program (ADBLP) can be used to determine the worst-case consequences of FDI attacks aiming to maximize the physical power flow on a target line. This ADBLP can be transformed into a single-level mixed-integer linear program (MILP), but it is numerically intractable for power systems with a large number of buses and branches. In this paper, a modified Benders’ decomposition algorithm is proposed to solve the ADBLP on large power systems without converting it to the MILP. Of more general interest, the proposed algorithm can be used to solve any ADBLP. Vulnerability of the IEEE 118-bus system and the Polish system with 2383 buses to FDI attacks is assessed using the proposed algorithm.
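For orientation, the textbook Benders' decomposition template that such algorithms build on replaces the inner LP by dual cuts; the notation below is generic, and the paper's modified algorithm adapts this scheme to the bi-level attacker-defender structure rather than applying it verbatim. For a problem $\min_a \{c^\top a + v(a)\}$ with LP value function $v(a) = \min_x \{d^\top x : E x \ge h - F a\}$, the master problem is

$$\min_{a,\,\theta} \; c^\top a + \theta \quad \text{s.t.} \quad \theta \ge \pi_k^\top (h - F a), \; k = 1, \dots, K,$$

where each cut uses an optimal dual solution $\pi_k$ of the subproblem evaluated at the master's current iterate $a_k$; one cut is added per iteration until the master and subproblem objective values coincide.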

Proceedings ArticleDOI
01 Jun 2020
TL;DR: By coding over the on/off signal, a small shared randomness can be established without corruption by the jammer, and without interfering with the standard superposition coding strategy for the Gaussian broadcast channel.
Abstract: This paper considers the two-user Gaussian arbitrarily-varying broadcast channel, wherein a power-limited transmitter wishes to send a message to each of two receivers. Each receiver sees a superposition of the transmitter's sequence, Gaussian noise, and a signal from a power-limited malicious jammer. The jammer is assumed to know the code, but is oblivious to real-time transmissions. The exact capacity region of this setting is determined to be the capacity region of the standard Gaussian broadcast channel, but with the noise variance increased by the power of the jammer, as long as the received power of the jammer at each receiver is less than that of the legitimate transmitter. A key aspect of the achievable scheme involves sharing randomness from the transmitter to the receivers by breaking the transmitted sequence into segments, and either transmitting at full power in a segment, or sending zero. By coding over the on/off signal, a small shared randomness can be established without corruption by the jammer, and without interfering with the standard superposition coding strategy for the Gaussian broadcast channel.
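Concretely, writing $P$ for the transmitter's power, $N_1 \le N_2$ for the receivers' noise variances, and $\Lambda_1, \Lambda_2$ for the jammer's received powers (our notation, not necessarily the paper's), the determined region is the usual superposition-coding region with the jammer's power folded into the noise:

$$R_1 \le \frac{1}{2}\log\!\left(1 + \frac{\beta P}{N_1 + \Lambda_1}\right), \qquad R_2 \le \frac{1}{2}\log\!\left(1 + \frac{(1-\beta)P}{\beta P + N_2 + \Lambda_2}\right), \qquad \beta \in [0,1],$$

valid as long as the jammer's received power is below the legitimate transmitter's at each receiver.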

Posted Content
TL;DR: A novel channel property is introduced, termed $U$-overwritability, which allows the adversary to make its false message appear legitimate. It is shown that if the authentication capacity is nonzero, it is in fact equal to the no-adversary capacity; this is demonstrated for a particular binary model.
Abstract: We consider keyless authentication for point-to-point communication in the presence of a myopic adversary. In particular, the adversary has access to a non-causal noisy version of the transmission and may use this knowledge to choose the channel state of an arbitrarily-varying channel between legitimate users; the receiver is successful if it either decodes to the correct message or correctly detects adversarial interference. We show that a channel condition called U-overwritability, which allows the adversary to make its false message appear legitimate and untampered with, is a sufficient condition for zero authentication capacity. We present a useful way to compare adversarial channels, and show that once an AVC becomes U-overwritable, it remains U-overwritable for all "less myopic" adversaries. Finally, we show that stochastic encoders are necessary for positive authentication capacity in some cases, and examine in detail a binary adversarial channel that illustrates this necessity. Namely, for this binary channel, we show that when the adversarial channel is degraded with respect to the main channel between users, the no-adversary capacity of the underlying channel is achievable with a deterministic encoder. Otherwise, provided the channel to the adversary is not perfect, a stochastic encoder is necessary for positive authentication capacity; if such an encoder is allowed, the no-adversary capacity is again achievable.