Journal Article
On characterization of entropy function via information inequalities
Zhen Zhang, Raymond W. Yeung
IEEE Transactions on Information Theory, Vol. 44, Iss. 4, pp. 1440–1452
TL;DR
The main discovery of this paper is a new information-theoretic inequality involving four discrete random variables which gives a negative answer to a fundamental problem in information theory: Γ̄*_n is strictly smaller than Γ_n whenever n > 3.
Abstract
Given n discrete random variables Ω = {X_1, …, X_n}, associated with any subset α of {1, 2, …, n} there is a joint entropy H(X_α), where X_α = {X_i : i ∈ α}. This can be viewed as a function defined on the power set of {1, 2, …, n} taking values in [0, +∞). We call this function the entropy function of Ω. The nonnegativity of the joint entropies implies that this function is nonnegative; the nonnegativity of the conditional joint entropies implies that it is nondecreasing; and the nonnegativity of the conditional mutual informations implies that it is submodular: for any two subsets α and β of {1, 2, …, n}, H_Ω(α) + H_Ω(β) ≥ H_Ω(α ∪ β) + H_Ω(α ∩ β). These properties are the so-called basic information inequalities of Shannon's information measures. Do they fully characterize the entropy function? To make this question precise, we view an entropy function as a (2^n − 1)-dimensional vector whose coordinates are indexed by the nonempty subsets of the ground set {1, 2, …, n}. Let Γ_n be the cone in ℝ^(2^n − 1) consisting of all vectors which have these three properties when viewed as functions on the power set of {1, 2, …, n}. Let Γ*_n be the set of all (2^n − 1)-dimensional vectors which correspond to the entropy functions of some set of n discrete random variables. The question can then be restated as: is it true that for any n, Γ̄*_n = Γ_n?
Here Γ̄*_n stands for the closure of the set Γ*_n. The answer is "yes" when n = 2 and 3, as proved in our previous work. Based on intuition, one might believe that the answer should be "yes" for any n. The main discovery of this paper is a new information-theoretic inequality involving four discrete random variables which gives a negative answer to this fundamental problem in information theory: Γ̄*_n is strictly smaller than Γ_n whenever n > 3. While this new inequality gives a nontrivial outer bound on the cone Γ̄*_4, an inner bound for Γ̄*_4 is also given. The inequality is also extended to any number of random variables.
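The three basic inequalities described in the abstract can be checked numerically. The Python sketch below (an illustration of the definitions, not code from the paper) builds a random joint distribution of three binary variables and verifies that its entropy function is nonnegative, nondecreasing, and submodular over all pairs of subsets.

```python
import itertools
import math
import random

random.seed(0)

# A random joint pmf over three binary variables X1, X2, X3 (indices 0..2).
outcomes = list(itertools.product([0, 1], repeat=3))
weights = [random.random() for _ in outcomes]
total = sum(weights)
pmf = {x: w / total for x, w in zip(outcomes, weights)}

def H(subset):
    """Joint entropy H(X_subset) in bits, computed by marginalization."""
    marg = {}
    for x, q in pmf.items():
        key = tuple(x[i] for i in sorted(subset))
        marg[key] = marg.get(key, 0.0) + q
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

# All 8 subsets of the ground set {0, 1, 2}, including the empty set.
subsets = [frozenset(s) for r in range(4)
           for s in itertools.combinations(range(3), r)]

for a in subsets:
    assert H(a) >= -1e-12                            # nonnegativity
    for b in subsets:
        if a <= b:
            assert H(a) <= H(b) + 1e-12              # nondecreasing
        # submodularity: H(a) + H(b) >= H(a ∪ b) + H(a ∩ b)
        assert H(a) + H(b) + 1e-12 >= H(a | b) + H(a & b)

print("all basic information inequalities hold")
```

Any joint distribution would pass these checks; the point of the paper is that the converse fails for n > 3, i.e. not every vector satisfying these inequalities arises from a distribution.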
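The paper's new inequality for four random variables is commonly stated as 2I(C;D) ≤ I(A;B) + I(A;C,D) + 3I(C;D|A) + I(C;D|B). As a numeric sanity check (our sketch under that statement, not the paper's proof), it can be evaluated on a random joint distribution of four binary variables:

```python
import itertools
import math
import random

random.seed(1)

# A random joint pmf over four binary variables A, B, C, D (indices 0..3).
outcomes = list(itertools.product([0, 1], repeat=4))
weights = [random.random() for _ in outcomes]
total = sum(weights)
pmf = {x: w / total for x, w in zip(outcomes, weights)}

def H(subset):
    """Joint entropy (bits) of the variables whose indices are in subset."""
    marg = {}
    for x, q in pmf.items():
        key = tuple(x[i] for i in sorted(subset))
        marg[key] = marg.get(key, 0.0) + q
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

def I(x, y, z=frozenset()):
    """Conditional mutual information I(X;Y|Z) in bits."""
    x, y, z = frozenset(x), frozenset(y), frozenset(z)
    return H(x | z) + H(y | z) - H(x | y | z) - H(z)

A, B, C, D = {0}, {1}, {2}, {3}
CD = C | D

# Zhang-Yeung inequality:
# 2 I(C;D) <= I(A;B) + I(A;C,D) + 3 I(C;D|A) + I(C;D|B)
slack = (I(A, B) + I(A, CD) + 3 * I(C, D, A) + I(C, D, B)
         - 2 * I(C, D))
assert slack >= -1e-9
print("Zhang-Yeung inequality holds on this distribution")
```

Passing on random distributions is of course no proof; the significance of the inequality is that it is not implied by the basic (Shannon-type) inequalities, which is what separates Γ̄*_4 from Γ_4.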
Citations
Book Chapter
Secret-sharing schemes: a survey
TL;DR: This survey describes the most important constructions of secret-sharing schemes, explains the connections between secret-sharing schemes and monotone formulae and monotone span programs, and presents the known lower bounds on share size.
Book
A First Course in Information Theory
TL;DR: This book provides the first comprehensive treatment of the theory of I-Measure, network coding theory, Shannon and non-Shannon type information inequalities, and a relation between entropy and group theory.
Book
Network Coding Theory
TL;DR: This book covers network coding and algebraic coding, with a focus on acyclic networks and the fundamental limits of linear codes.
Posted Content
Learning with Submodular Functions: A Convex Optimization Perspective
TL;DR: Submodular functions are relevant to machine learning for at least two reasons: (1) some problems may be expressed directly as the optimization of submodular functions, and (2) the Lovász extension of submodular functions provides a useful set of regularization functions for supervised and unsupervised learning, as discussed by the authors.
Book
Learning with Submodular Functions: A Convex Optimization Perspective
TL;DR: In Learning with Submodular Functions: A Convex Optimization Perspective, the theory of submodular functions is presented in a self-contained way from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization and convex optimization problems.
References
Book
Probability, random variables and stochastic processes
TL;DR: This book discusses the concept of a random variable and the meaning and axioms of probability, as well as Markov chains and queueing theory.
Book
The Theory of Error-Correcting Codes
TL;DR: This book presents an introduction to BCH codes and finite fields, methods for combining codes, and self-dual codes and invariant theory, as well as nonlinear codes, Hadamard matrices, designs, and the Golay code.
Book
Information Theory: Coding Theorems for Discrete Memoryless Systems
Imre Csiszár, János Körner
TL;DR: This new edition presents unique discussions of information theoretic secrecy and of zero-error information theory, including the deep connections of the latter with extremal combinatorics.