Measuring Information Leakage Using Generalized Gain Functions
Citations
An operational measure of information leakage
Privacy Games: Optimal User-Centric Data Obfuscation
An Operational Approach to Information Leakage
Frequently Asked Questions (11)
Q2. What are the future works mentioned in the paper "Measuring information leakage using generalized gain functions" ?
The authors also proved important mathematical properties of g-leakage that further attest to the significance of their framework. As future work, the authors intend to identify algorithms to calculate g-capacity, possibly using linear programming. It would also be interesting to extend g-leakage to the scenario where the adversary does not know the prior π, but instead has (possibly incorrect) beliefs about it, as in the works of Clarkson, Myers, and Schneider [31] and Hamadou, Sassone, and Palamidessi [32]. The authors are grateful to Miguel E. Andrés for discussions of this work, and to the anonymous referees for their comments and suggestions.
Q3. What is the reason for the factorization of C2?
On the assumption that C2's columns are linearly independent, a maximal set of linearly independent rows of C2 forms an invertible square matrix, and so the authors are done by Theorem 6.5.
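To make the linear-algebra step concrete, the following is a minimal sketch (not from the paper; the matrix values are hypothetical) that, assuming C2 has linearly independent columns, greedily selects rows of C2 forming an invertible square submatrix:

    import numpy as np

    # Hypothetical channel matrix C2 whose columns are linearly independent (full column rank).
    C2 = np.array([
        [0.6, 0.4],
        [0.5, 0.5],
        [0.1, 0.9],
    ])

    # Greedily keep rows that increase the rank; once there are as many rows as columns,
    # the kept rows form a square submatrix of full rank, i.e. an invertible matrix.
    rows = []
    for i in range(C2.shape[0]):
        if np.linalg.matrix_rank(C2[rows + [i]]) == len(rows) + 1:
            rows.append(i)
        if len(rows) == C2.shape[1]:
            break

    square = C2[rows]
    print(rows, np.linalg.det(square))  # a nonzero determinant confirms invertibility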
Q4. What is the simplest way to explain the Shannon leakage?
These results tell us that if the authors can show that the min-capacity of C is small, then the leakage under any gain function g and under any prior π is guaranteed to be small as well, as is the Shannon leakage.
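As a rough illustration of the bound being invoked, here is a minimal sketch (not from the paper; the channel matrix is hypothetical) that computes the min-capacity of a channel C as the log of the sum of its column maxima, the quantity which, by the paper's bound, dominates the g-leakage for every gain function g and every prior π:

    import numpy as np

    # Hypothetical channel matrix: rows are secrets x, columns are outputs y, entries are p(y|x).
    C = np.array([
        [0.7, 0.2, 0.1],
        [0.1, 0.8, 0.1],
        [0.3, 0.3, 0.4],
    ])

    # Min-capacity (realized by the uniform prior): log2 of the sum of the column maxima.
    # If this value is small, the leakage under any g and any prior is guaranteed to be small too.
    min_capacity = np.log2(C.max(axis=0).sum())
    print(min_capacity)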
Q5. What is the expected gain of every element of W?
Under the uniform prior π, it is easy to see that the expected gain of every element (u, x) of W is 2^-10, since for every u, X[u] is uniformly distributed on [0..1023].
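A short worked check, assuming (consistently with the example) that a guess (u, x) gains 1 exactly when X[u] = x and 0 otherwise:

\[
E[\text{gain of } (u, x)] \;=\; \sum_{X} \pi(X)\, g\big((u, x), X\big) \;=\; \Pr[X[u] = x] \;=\; \frac{1}{1024} \;=\; 2^{-10}.
\]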
Q6. What is the guess under output z?
Since X+ is the best guess a priori, the authors conclude by Theorem 4.2 that Lg(π, C2) > 0. Lemma 6.4 allows the authors to prove some significant special cases of Conjecture 6.3, as they now show.
Q7. What is the exact setW of guesses?
Note that the exact set W of guesses is not important, as any gain function with n possible guesses can be represented by an n × |X| matrix G. First, from Theorem 6.9 (adapted to ≤Gn instead of ≤G), the authors know that C1 ≤Gn C2 iff Lg(πu, C1) ≤ Lg(πu, C2) for all g ∈ Gn.
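To illustrate the matrix view of a gain function, here is a minimal sketch (the gain matrix, prior, and guesses below are hypothetical, not taken from the paper): a gain function with n guesses is stored as an n × |X| matrix G, and the prior g-vulnerability is the best expected gain over all guesses, Vg(π) = max_w Σ_x π[x]·G[w, x].

    import numpy as np

    # Hypothetical gain matrix G: rows are the n guesses, columns are the secrets in X,
    # and G[w, x] is the gain from making guess w when the secret is x.
    G = np.array([
        [1.0, 0.0, 0.0],   # guess the first secret exactly
        [0.0, 1.0, 0.0],   # guess the second secret exactly
        [0.5, 0.5, 0.0],   # a hedged guess covering the first two secrets
    ])

    pi = np.array([0.5, 0.3, 0.2])   # hypothetical prior on X

    # Prior g-vulnerability: the best expected gain over all guesses.
    Vg_prior = (G @ pi).max()
    print(Vg_prior)   # 0.5 here: guessing the first secret exactly is optimal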
Q8. What is the order of Bayes risk?
This order is sound and complete for Bayes risk, and they show that, when contexts are taken into account, Bayes risk is maximally discerning compared to the alternative elementary tests of marginal guesswork, guessing entropy, and Shannon entropy.
Q9. What is the prior that realizes gd-capacity?
Now if the authors consider the prior π′ = (0.5, 0.5, 0), the authors find that Vgd(π′) = 0.5, pY = (0.3, 0.7), Vgd(pX|y1) = 1, Vgd(pX|y2) = 5/7, and Vgd(π′, Ex5) = 0.8, which gives Lgd(π′, Ex5) = log 1.6 ≈ 0.6781.
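The quoted value follows from the decomposition of posterior g-vulnerability as an expectation over outputs, using only the numbers stated above:

\[
V_{g_d}(\pi', \mathrm{Ex5}) \;=\; \sum_{y} p(y)\, V_{g_d}(p_{X|y}) \;=\; 0.3 \cdot 1 + 0.7 \cdot \tfrac{5}{7} \;=\; 0.8,
\qquad
L_{g_d}(\pi', \mathrm{Ex5}) \;=\; \log \frac{0.8}{0.5} \;=\; \log 1.6 \;\approx\; 0.678.
\]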
Q10. What is the simplest way to prove that C2 is invertible?
Note that ≤G ⊆ ≤G2; the above theorem shows that in the case when C2 is invertible, the conjecture holds even if the authors restrict to 2-block gain functions.
Q11. What is the implication of the solution to the problem of maximizing tr(DπC1SG)?
Recall from Section IV-C that Vg(π,C1) is the solution to the problem of maximizing tr(DπC1SG) subject to S being a channel matrix.
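Because the objective tr(DπC1SG) is linear in S and each row of S ranges over a probability simplex, the maximum is attained by a deterministic strategy that picks the best guess for each output. The following is a minimal sketch of computing Vg(π, C1) that way (the prior, channel, and gain matrix below are hypothetical, not the paper's example):

    import numpy as np

    pi = np.array([0.4, 0.3, 0.3])   # hypothetical prior on the secrets X
    C1 = np.array([                  # hypothetical channel matrix: entry [x, y] is p(y|x)
        [0.8, 0.2],
        [0.2, 0.8],
        [0.5, 0.5],
    ])
    G = np.eye(3)                    # hypothetical gain matrix (identity gain: guess the secret itself)

    # score[y, w] = sum_x pi[x] * C1[x, y] * G[w, x], the expected gain of guess w on output y.
    score = C1.T @ np.diag(pi) @ G.T

    # Maximizing tr(diag(pi) C1 S G) over channel matrices S picks, for each output y,
    # the guess w with the highest score, so the optimum equals the sum of the row maxima.
    Vg_posterior = score.max(axis=1).sum()
    print(Vg_posterior)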