Proceedings ArticleDOI

Design of error-free perfect secrecy system by prefix codes and partition codes

01 Jul 2012-pp 1593-1597
TL;DR: This paper investigates how to design an error-free and perfectly secure crypto-system, introducing an approach based on prefix codes as well as an optimum partition code for which the key consumption is minimum for a fixed number of channel uses.
Abstract: We investigate how to design an error-free and perfectly secure crypto-system. In particular, we are interested in the efficiency of an error-free perfect secrecy (EPS) system. An approach based on prefix codes is introduced. An optimum partition code is also introduced, for which the key consumption is minimum for a fixed number of channel uses. Results obtained in this paper can also be applied to study the tradeoff between the key consumption and the number of channel uses needed to transmit the encrypted message.
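To make the tradeoff concrete, the following is a minimal baseline sketch, assuming Huffman (prefix) coding followed by a one-time pad so that one key bit is consumed per channel use; it is not the paper's prefix-code or partition-code construction, which is instead concerned with minimizing key consumption for a fixed number of channel uses.

```python
# Baseline sketch (not the paper's construction): prefix-code the message,
# then one-time-pad the bits, so key bits consumed = channel uses.
import heapq
import secrets

def huffman_code(pmf):
    """Build a binary prefix (Huffman) code for a dict {symbol: probability}."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(pmf.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

pmf = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(pmf)
lengths = {s: len(w) for s, w in code.items()}

message = "abacad"
bits = "".join(code[s] for s in message)            # channel uses before encryption
key = "".join(secrets.choice("01") for _ in bits)   # one fresh uniform key bit per channel use
cipher = "".join(str(int(b) ^ int(k)) for b, k in zip(bits, key))

# Note: the ciphertext length still depends on the message here, so this naive
# baseline alone does not give perfect secrecy for variable-length messages.
print("codeword lengths:", lengths)
print("channel uses:", len(bits), "| key bits consumed:", len(key))
```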
Citations
Journal ArticleDOI
01 Jan 2017
TL;DR: This paper builds a mathematical framework for the analysis of visible light positioning systems and outlines a system that provides provably secure wireless communication together with a novel authentication scheme based on users' locations.
Abstract: Lighting systems are undergoing a revolution from fluorescent lamps or tubes to light emitting diodes that offer greater energy efficiency and a longer lifetime. This paper considers using these light sources to provide other benefits. The proposed system has low deployment cost and, most importantly, it uses the unique characteristics of visible light to complement existing wireless communication systems. This paper outlines a system design with three features. First, it can provide a provably secure wireless communication capability. In particular, a user is authenticated according to his or her location. Second, it includes indoor positioning systems with high precision. Finally, it also supports information broadcast with a high frequency reuse factor. This paper provides both experimental and theoretical results. We build the mathematical framework for the analysis of visible light positioning systems. Experimental results and analysis are given to explain the concerns of light sensors in a mobile phone for building an indoor positioning system. A novel authentication system based on users' locations is proposed. Copyright © 2015 John Wiley & Sons, Ltd.
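As a generic illustration of the positioning step only (not the framework or system built in the paper), the sketch below estimates a 2D receiver position from assumed distance estimates to known luminaire locations via linearized least squares; all coordinates and noise figures are invented for the example.

```python
# Generic trilateration sketch: solve ||x - a_i||^2 = d_i^2 after subtracting
# the first equation to remove the quadratic term in x. Illustrative only.
import numpy as np

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # assumed LED positions (m)
true_pos = np.array([1.5, 1.0])
dists = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.02, 3)

A = 2 * (anchors[1:] - anchors[0])
b = (dists[0] ** 2 - dists[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", est)
```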

21 citations

Proceedings ArticleDOI
01 Jun 2020
TL;DR: There are significant differences in secret key/secrecy tradeoffs between lossless and almost-lossless compression under perfect secrecy, secrecy by design, maximal leakage, and local differential privacy.
Abstract: The relationship between secrecy, compression rate, and shared secret key rate is surveyed under perfect secrecy, equivocation, maximal leakage, local differential privacy, and secrecy by design. It is emphasized that the utility cost of jointly compressing and securing data is very sensitive to (a) the adopted secrecy metric and (b) the specifics of the compression setting. That is, although it is well-known that the fundamental limits of traditional lossless variable-length compression and almost-lossless fixed-length compression are intimately related, this relationship collapses for many secrecy measures. The asymptotic fundamental limit of almost-lossless fixed-length compression remains entropy for all secrecy measures studied. However, the fundamental limits of lossless variable-length compression are no longer entropy under perfect secrecy, secrecy by design, and sometimes under local differential privacy. Moreover, there are significant differences in secret key/secrecy tradeoffs between lossless and almost-lossless compression under perfect secrecy, secrecy by design, maximal leakage, and local differential privacy.
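A toy numerical sketch (illustrative only, with an assumed three-symbol source) shows the mechanism behind this collapse: variable-length codeword lengths depend on the source symbol and can therefore leak information even when the payload bits are perfectly encrypted, whereas fixed-length codewords reveal nothing through their length.

```python
# Why length matters under secrecy: a variable-length prefix code approaches
# entropy but its codeword lengths depend on the symbol; a fixed-length code
# pays log2(|alphabet|) bits per symbol but leaks nothing through length.
import math

pmf = {"a": 0.7, "b": 0.2, "c": 0.1}
var_len = {"a": 1, "b": 2, "c": 2}          # lengths of a valid prefix code (Kraft sum = 1)
fixed_len = math.ceil(math.log2(len(pmf)))  # 2 bits per symbol

entropy = -sum(p * math.log2(p) for p in pmf.values())
avg_var = sum(pmf[s] * var_len[s] for s in pmf)

print(f"entropy H(X)             = {entropy:.3f} bits")
print(f"avg variable-length rate = {avg_var:.3f} bits  (length differs by symbol)")
print(f"fixed-length rate        = {fixed_len} bits      (length reveals nothing)")
```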

14 citations

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the combination between causal/zero-delay source coding and information-theoretic secrecy and derived bounds on the key rate and coding rate needed for perfect zero-delay secrecy.
Abstract: We investigate the combination between causal/zero-delay source coding and information-theoretic secrecy. Two source coding models with secrecy constraints are considered. We start by considering zero-delay perfectly secret lossless transmission of a memoryless source. We derive bounds on the key rate and coding rate needed for perfect zero-delay secrecy. In this setting, we consider two models that differ by the ability of the eavesdropper to parse the bit-stream passing from the encoder to the legitimate decoder into separate messages. We also consider causal source coding with a fidelity criterion and side information at the decoder and the eavesdropper. Unlike the zero-delay setting where variable-length coding is traditionally used but might leak information on the source through the length of the codewords, in this setting, since delay is allowed, block coding is possible. We show that in this setting, a separation of encryption and causal source coding is optimal.
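The separation architecture mentioned in the last sentence can be sketched as follows, assuming a toy 2-bit causal scalar quantizer followed by a one-time pad on the quantizer indices; this is only an illustration of "causal source coding followed by encryption", not the coding schemes analyzed in the paper.

```python
# Toy separation sketch: quantize each sample causally (no lookahead), send the
# quantizer index as a fixed-length word, and one-time-pad the index bits.
import secrets

LEVELS = [-1.5, -0.5, 0.5, 1.5]   # reproduction levels of a 2-bit scalar quantizer
BITS = 2

def quantize(x):
    """Causal, symbol-by-symbol quantization: nearest reproduction level."""
    return min(range(len(LEVELS)), key=lambda i: abs(x - LEVELS[i]))

source = [0.3, -1.2, 0.9, 1.4, -0.1]
keys = [format(secrets.randbits(BITS), f"0{BITS}b") for _ in source]   # fresh 2-bit keys

cipher = [quantize(x) ^ int(k, 2) for x, k in zip(source, keys)]       # encrypt indices
decoded = [LEVELS[c ^ int(k, 2)] for c, k in zip(cipher, keys)]        # decrypt, reconstruct
print("reproductions:", decoded)
```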

12 citations

Posted Content
TL;DR: It is shown that in this setting, a separation of encryption and causal source coding is optimal, and bounds on the key rate and coding rate needed for perfect zero-delay secrecy are derived.
Abstract: We investigate the combination between causal/zero-delay source coding and information-theoretic secrecy. Two source coding models with secrecy constraints are considered. We start by considering zero-delay perfectly secret lossless transmission of a memoryless source. We derive bounds on the key rate and coding rate needed for perfect zero-delay secrecy. In this setting, we consider two models which differ by the ability of the eavesdropper to parse the bit-stream passing from the encoder to the legitimate decoder into separate messages. We also consider causal source coding with a fidelity criterion and side information at the decoder and the eavesdropper. Unlike the zero-delay setting where variable-length coding is traditionally used but might leak information on the source through the length of the codewords, in this setting, since delay is allowed, block coding is possible. We show that in this setting, separation of encryption and causal source coding is optimal.

7 citations



Journal ArticleDOI
02 Feb 2021
TL;DR: In this paper, the relationship among secrecy, compression rate and shared secret key rate in lossless data compression is studied through the lenses of perfect secrecy, mutual information leakage, maximal leakage, local differential privacy, and secrecy by design.
Abstract: The relationships among secrecy, compression rate and shared secret key rate in lossless data compression are studied through the lenses of perfect secrecy, mutual information leakage, maximal leakage, local differential privacy, and secrecy by design. It is revealed that the utility cost of jointly compressing and securing data is very sensitive to the adopted secrecy metric and the specifics of the compression setting. That is, although it is well-known that the fundamental limits of traditional lossless variable-length compression and almost-lossless fixed-length compression are intimately related, this relationship collapses for many secrecy measures. The asymptotic fundamental limit of almost-lossless fixed-length compression remains entropy for all secrecy measures studied. However, the fundamental limit of lossless variable-length compression is no longer entropy under perfect secrecy, secrecy by design, or local differential privacy. Moreover, there are significant differences in secret key/secrecy tradeoffs between lossless and almost-lossless compression under perfect secrecy, secrecy by design, maximal leakage, and local differential privacy.

1 citation

References
Book
01 Jan 1991
TL;DR: This textbook develops the core concepts of information theory, including entropy, data compression, channel capacity, rate distortion, and network information theory, together with applications to statistics, Kolmogorov complexity, and portfolio theory.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations

Book
06 Apr 2011
TL;DR: This book develops the theory of majorization, doubly stochastic matrices, and Schur-convex functions, with applications to combinatorial analysis, matrix theory, numerical analysis, probability, and statistics.
Abstract: Introduction.- Doubly Stochastic Matrices.- Schur-Convex Functions.- Equivalent Conditions for Majorization.- Preservation and Generation of Majorization.- Rearrangements and Majorization.- Combinatorial Analysis.- Geometric Inequalities.- Matrix Theory.- Numerical Analysis.- Stochastic Majorizations.- Probabilistic, Statistical, and Other Applications.- Additional Statistical Applications.- Orderings Extending Majorization.- Multivariate Majorization.- Convex Functions and Some Classical Inequalities.- Stochastic Ordering.- Total Positivity.- Matrix Factorizations, Compounds, Direct Products, and M-Matrices.- Extremal Representations of Matrix Functions.
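For readers unfamiliar with the book's central notion, here is a short sketch (not taken from the book) of the majorization check that underlies Schur-convex functions.

```python
# x is majorized by y if, after sorting both in decreasing order, every partial
# sum of x is <= the corresponding partial sum of y and the total sums agree.
def majorized(x, y, tol=1e-12):
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if len(xs) != len(ys) or abs(sum(xs) - sum(ys)) > tol:
        return False
    cx = cy = 0.0
    for a, b in zip(xs, ys):
        cx, cy = cx + a, cy + b
        if cx > cy + tol:
            return False
    return True

# Example: the uniform distribution is majorized by any other distribution on
# the same number of outcomes, which is why entropy (a Schur-concave function)
# is maximized by the uniform distribution.
print(majorized([0.25, 0.25, 0.25, 0.25], [0.5, 0.3, 0.1, 0.1]))  # True
```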

6,641 citations

Journal ArticleDOI
TL;DR: A tight upper bound on the conditional entropy of X given Y in terms of the error probability and the marginal distribution of X is given and a new lower bound for countably infinite alphabets is found.
Abstract: Fano's inequality relates the error probability of guessing a finitely-valued random variable X given another random variable Y and the conditional entropy of X given Y. It is not necessarily tight when the marginal distribution of X is fixed. This paper gives a tight upper bound on the conditional entropy of X given Y in terms of the error probability and the marginal distribution of X. A new lower bound on the conditional entropy for countably infinite alphabets is also found. The relationship between the reliability criteria of vanishing error probability and vanishing conditional entropy is also discussed. A strengthened form of the Schur-concavity of entropy which holds for finite or countably infinite random variables is given.
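For reference, the classical form of Fano's inequality that this paper refines can be stated as follows; the paper's tight bound, which also uses the marginal distribution of X, is not reproduced here.

```latex
% Classical Fano inequality: for a finitely-valued X with alphabet \mathcal{X},
% an estimator \hat{X}(Y), and error probability P_e = \Pr[\hat{X}(Y) \neq X],
\[
  H(X \mid Y) \;\le\; h_b(P_e) + P_e \log\bigl(|\mathcal{X}| - 1\bigr),
  \qquad h_b(p) = -p\log p - (1-p)\log(1-p),
\]
% where h_b(\cdot) is the binary entropy function.
```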

100 citations

Journal ArticleDOI
TL;DR: The relation between the Shannon entropy and variational distance, two fundamental and frequently-used quantities in information theory, is studied by means of certain bounds on the entropy difference between two probability distributions in terms of the variational distance between them and their alphabet sizes.
Abstract: The relation between the Shannon entropy and variational distance, two fundamental and frequently-used quantities in information theory, is studied in this paper by means of certain bounds on the entropy difference between two probability distributions in terms of the variational distance between them and their alphabet sizes. We also show how to find the distribution achieving the minimum (or maximum) entropy among those distributions within a given variational distance from any given distribution. These results are applied to solve a number of problems that are of fundamental interest. For entropy estimation, we obtain an analytic formula for the confidence interval, solving a problem that has been open for more than 30 years. For approximation of probability distributions, we find the minimum entropy difference between two distributions in terms of their alphabet sizes and the variational distance between them. In particular, we show that the entropy difference between two distributions that are close in variational distance can be arbitrarily large if the alphabet sizes of the two distributions are unconstrained. For random number generation, we characterize the tradeoff between the amount of randomness required and the distortion in terms of variational distance. New tools for non-convex optimization have been developed to establish the results in this paper.
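The claim that the entropy difference can be arbitrarily large at small variational distance is easy to reproduce numerically; the sketch below (illustrative only, not the paper's bounds) compares a point mass with a slightly smoothed version of it as the alphabet size n grows.

```python
# Two distributions at fixed l1 distance whose entropy difference grows with
# the alphabet size. (Variational distance is l1 or l1/2 depending on the
# convention; either way it stays fixed while the entropy gap grows like
# eps * log2(n).)
import math

def entropy_bits(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

eps = 0.01  # probability mass moved off the single atom
for n in (10, 10**3, 10**6):
    p = [1.0] + [0.0] * n                 # point mass: H(p) = 0
    q = [1.0 - eps] + [eps / n] * n       # same mass spread over n extra outcomes
    l1 = sum(abs(a - b) for a, b in zip(p, q))   # = 2*eps regardless of n
    print(f"n = {n:>7}: l1 distance = {l1:.3f}, |H(p) - H(q)| = {entropy_bits(q):.3f} bits")
```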

96 citations

Proceedings ArticleDOI
24 Jun 2007
TL;DR: The relation between the Shannon entropy and variational distance, two fundamental and frequently-used quantities in information theory, is studied by means of certain bounds on the entropy difference between two probability distributions in terms of the variational distance between them and their alphabet sizes.
Abstract: For two probability distributions with finite alphabets, a small variational distance between them does not imply that the difference between their entropies is small if one of the alphabet sizes is unknown. This fact, seemingly contradictory to the continuity of entropy for finite alphabet, is clarified in the current paper by means of certain bounds on the entropy difference between two probability distributions in terms of the variational distance between them and their alphabet sizes. These bounds are shown to be the tightest possible. The Lagrange multiplier cannot be applied here because the variational distance is not differentiable. We also show how to find the distribution achieving the minimum (or maximum) entropy among those distributions within a given variational distance from any given distribution. The results show the limitation of certain algorithms for entropy estimation. An upper bound is obtained for the rate-distortion function with respect to the error frequency criterion, and the minimal average complexity is determined for the generation of a probability distribution with a distortion criterion.

45 citations