Author

Mokshay Madiman

Other affiliations: Yale University, Brown University
Bio: Mokshay Madiman is an academic researcher at the University of Delaware. He has contributed to research on topics including the entropy power inequality and Rényi entropy. He has an h-index of 24 and has co-authored 102 publications receiving 1,941 citations. Previous affiliations of Mokshay Madiman include Yale University and Brown University.


Papers
Journal ArticleDOI
TL;DR: In this paper, new families of Fisher information and entropy power inequalities for sums of independent random variables are presented; they relate the information in the sum of n independent random variables to the information contained in sums over subsets of the random variables, for an arbitrary collection of subsets.
Abstract: New families of Fisher information and entropy power inequalities for sums of independent random variables are presented. These inequalities relate the information in the sum of n independent random variables to the information contained in sums over subsets of the random variables, for an arbitrary collection of subsets. As a consequence, a simple proof of the monotonicity of information in central limit theorems is obtained, both in the setting of independent and identically distributed (i.i.d.) summands as well as in the more general setting of independent summands with variance-standardized sums.
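As a point of reference for the abstract above, the leave-one-out case of these inequalities can be written down explicitly. The following is a sketch of that special case for real-valued summands, with N denoting entropy power; it is not a full account of the paper's general subset-sum inequalities.

\[
N\Big(\sum_{i=1}^{n} X_i\Big) \;\ge\; \frac{1}{n-1}\sum_{i=1}^{n} N\Big(\sum_{j\ne i} X_j\Big),
\qquad
N(X) := \frac{1}{2\pi e}\, e^{2h(X)},
\]

where h denotes differential entropy and the X_i are independent. Specializing to i.i.d. summands and using the scaling identity h(aX) = h(X) + \log a, this inequality yields the monotonicity h(S_{n-1}) \le h(S_n) for the standardized sums S_n = (X_1 + \dots + X_n)/\sqrt{n}.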

177 citations

Journal ArticleDOI
TL;DR: In this paper, upper and lower bounds on the joint entropy of a collection of random variables are obtained in terms of an arbitrary collection of subset joint entropies, and a duality between the upper and lower bounds is developed.
Abstract: Upper and lower bounds are obtained for the joint entropy of a collection of random variables in terms of an arbitrary collection of subset joint entropies. These inequalities generalize Shannon's chain rule for entropy as well as inequalities of Han, Fujishige, and Shearer. A duality between the upper and lower bounds for joint entropy is developed. All of these results are shown to be special cases of general, new results for submodular functions; thus, the inequalities presented constitute a richly structured class of Shannon-type inequalities. The new inequalities are applied to obtain new results in combinatorics, such as bounds on the number of independent sets in an arbitrary graph and the number of zero-error source-channel codes, as well as determinantal inequalities in matrix theory. A general inequality for relative entropies is also developed. Finally, connections of the results to the literature in economics, computer science, and physics are explored.
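For orientation, two of the classical special cases named above can be stated compactly; these are the standard forms of Han's inequality and Shearer's lemma, not new statements from the paper.

\[
H(X_1,\dots,X_n) \;\le\; \frac{1}{n-1}\sum_{i=1}^{n} H(X_1,\dots,X_{i-1},X_{i+1},\dots,X_n)
\quad\text{(Han)},
\]

\[
H(X_1,\dots,X_n) \;\le\; \frac{1}{k}\sum_{S\in\mathcal{F}} H(X_S)
\quad\text{(Shearer)},
\]

where \mathcal{F} is any family of subsets of \{1,\dots,n\} in which every index appears in at least k members, and X_S denotes the subcollection indexed by S. Han's inequality is the case where \mathcal{F} consists of all (n-1)-element subsets, so that k = n-1.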

157 citations

Book ChapterDOI
TL;DR: This work surveys various recent developments on forward and reverse entropy power inequalities, not just for the Shannon-Boltzmann entropy but also more generally for the Rényi entropy, and discusses connections between the so-called functional and probabilistic analogues of some classical inequalities in geometric functional analysis.
Abstract: The entropy power inequality, which plays a fundamental role in information theory and probability, may be seen as an analogue of the Brunn-Minkowski inequality. Motivated by this connection to convex geometry, we survey various recent developments on forward and reverse entropy power inequalities not just for the Shannon-Boltzmann entropy but also more generally for Rényi entropy. In the process, we discuss connections between the so-called functional (or integral) and probabilistic (or entropic) analogues of some classical inequalities in geometric functional analysis.
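The analogy driving the survey can be made explicit. The two displays below are the standard statements of the entropy power inequality and the Brunn-Minkowski inequality, included here only for orientation.

\[
N(X+Y) \;\ge\; N(X) + N(Y), \qquad N(X) := \frac{1}{2\pi e}\, e^{2h(X)/n},
\]

for independent random vectors X, Y in R^n with densities, and

\[
\mathrm{vol}(A+B)^{1/n} \;\ge\; \mathrm{vol}(A)^{1/n} + \mathrm{vol}(B)^{1/n}
\]

for nonempty compact sets A, B in R^n, where A+B is the Minkowski sum. Formally, entropy power stands in for a power of the volume of a set and independent addition stands in for Minkowski summation; the survey develops this correspondence in both the forward and reverse directions.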

95 citations

Journal ArticleDOI
TL;DR: In this paper, a reverse entropy power inequality for convex measures is developed, which may be seen as a geometric inverse of the entropy power inequalities of Shannon and Stam for log-concave measures, as well as a version of Milman's reverse Brunn-Minkowski inequality.
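For orientation, Milman's reverse Brunn-Minkowski inequality asserts the existence of a universal constant C such that any two convex bodies A, B in R^n have volume-preserving linear images u(A), v(B) with

\[
\mathrm{vol}\big(u(A)+v(B)\big)^{1/n} \;\le\; C\Big(\mathrm{vol}(A)^{1/n} + \mathrm{vol}(B)^{1/n}\Big).
\]

The reverse entropy power inequality of this paper has the same shape, with entropy power in place of the volume radius and log-concave (more generally, convex) probability measures in place of convex bodies; see the paper for the precise statement and the class of measures covered.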

92 citations

Journal ArticleDOI
TL;DR: In this paper, it was shown that the entropy per coordinate in a log-concave random vector of any dimension with given density at the mode has a range of just 1.
Abstract: The entropy per coordinate in a log-concave random vector of any dimension with given density at the mode is shown to have a range of just 1. Uniform distributions on convex bodies are at the lower end of this range, the distribution with i.i.d. exponentially distributed coordinates is at the upper end, and the normal is exactly in the middle. Thus, in terms of the amount of randomness as measured by entropy per coordinate, any log-concave random vector of any dimension contains randomness that differs from that in the normal random variable with the same maximal density value by at most 1/2. As applications, we obtain an information-theoretic formulation of the famous hyperplane conjecture in convex geometry, entropy bounds for certain infinitely divisible distributions, and quantitative estimates for the behavior of the density at the mode on convolution. More generally, one may consider so-called convex or hyperbolic probability measures on Euclidean spaces; we give new constraints on entropy per coordinate for this class of measures, which generalize our results under the log-concavity assumption, expose the extremal role of multivariate Pareto-type distributions, and give some applications.
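In symbols, the range assertion reads as follows; this is a restatement of the abstract, with f the density of X and \|f\|_\infty its value at the mode.

\[
0 \;\le\; \frac{h(X)}{n} + \frac{1}{n}\log \|f\|_\infty \;\le\; 1
\]

for any log-concave random vector X in R^n with density f. A quick check for the standard normal (a direct computation, not taken from the paper): h(X)/n = \tfrac{1}{2}\log(2\pi e) and \|f\|_\infty = (2\pi)^{-n/2}, so the quantity above equals \tfrac{1}{2}\log(2\pi e) - \tfrac{1}{2}\log(2\pi) = \tfrac{1}{2}, placing the normal exactly in the middle of the range, as the abstract states.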

92 citations


Cited by
Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces are studied in this book, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations

Book
16 Jan 2012
TL;DR: In this book, a comprehensive treatment of network information theory and its applications is provided, offering the first unified coverage of both classical and recent results, including successive cancellation and superposition coding, MIMO wireless communication, network coding, and cooperative relaying.
Abstract: This comprehensive treatment of network information theory and its applications provides the first unified coverage of both classical and recent results. With an approach that balances the introduction of new models and new coding techniques, readers are guided through Shannon's point-to-point information theory, single-hop networks, multihop networks, and extensions to distributed computing, secrecy, wireless communication, and networking. Elementary mathematical tools and techniques are used throughout, requiring only basic knowledge of probability, whilst unified proofs of coding theorems are based on a few simple lemmas, making the text accessible to newcomers. Key topics covered include successive cancellation and superposition coding, MIMO wireless communication, network coding, and cooperative relaying. Also covered are feedback and interactive communication, capacity approximations and scaling laws, and asynchronous and random access channels. This book is ideal for use in the classroom, for self-study, and as a reference for researchers and engineers in industry and academia.

2,442 citations

Journal ArticleDOI

2,415 citations

Book
01 Jan 2013
TL;DR: In this book, the author studies the distributional properties of Lévy processes and develops potential theory for Lévy processes, along with the Wiener-Hopf factorizations.
Abstract: Preface to the revised edition; Remarks on notation; 1. Basic examples; 2. Characterization and existence; 3. Stable processes and their extensions; 4. The Lévy-Itô decomposition of sample functions; 5. Distributional properties of Lévy processes; 6. Subordination and density transformation; 7. Recurrence and transience; 8. Potential theory for Lévy processes; 9. Wiener-Hopf factorizations; 10. More distributional properties; Supplement; Solutions to exercises; References and author index; Subject index.

1,957 citations