Author

Liyao Wang

Bio: Liyao Wang is an academic researcher from J.P. Morgan & Co. The author has contributed to research on topics including Rényi entropy and elementary proofs, has an h-index of 2, and has co-authored 2 publications receiving 66 citations.

Papers
Book Chapter
TL;DR: An elementary proof is provided of sharp bounds for the varentropy of random vectors with log-concave densities, as well as for deviations of the information content from its mean.
Abstract: An elementary proof is provided of sharp bounds for the varentropy of random vectors with log-concave densities, as well as for deviations of the information content from its mean. These bounds significantly improve on the bounds obtained by Bobkov and Madiman (Ann Probab 39(4):1528–1543, 2011).

57 citations
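
For orientation, the objects in this abstract can be written out explicitly. The display below is a sketch, with notation (information content, varentropy) supplied here as standard background rather than quoted from the chapter; the dimensional bound shown is the sharp form such varentropy estimates take for log-concave densities.

% Information content and varentropy of X ~ f on R^d (notation assumed here).
\[
  \tilde h(X) = -\log f(X), \qquad
  V(X) = \operatorname{Var}\bigl[\tilde h(X)\bigr],
\]
\[
  f \ \text{log-concave on } \mathbb{R}^d \;\Longrightarrow\; V(X) \le d .
\]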

Journal Article
TL;DR: In this paper, a natural link between the notions of majorization and strongly Sperner posets is elucidated and then used to obtain a variety of consequences, including new Rényi entropy inequalities for sums of independent, integer-valued random variables.

19 citations
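
As background for the majorization language above (standard definitions, not quoted from the paper): probability vectors are compared after decreasing rearrangement, and majorization orders all Rényi entropies at once because each $H_\alpha$ is Schur-concave.

% Majorization of probability vectors and Schur-concavity of Renyi entropy
% (standard background; p_{[i]} denotes the entries of p in decreasing order).
\[
  p \prec q \iff \sum_{i=1}^{k} p_{[i]} \le \sum_{i=1}^{k} q_{[i]}
  \ \ \text{for } 1 \le k \le n, \quad \text{with } \sum_i p_i = \sum_i q_i = 1,
\]
\[
  H_\alpha(p) = \frac{1}{1-\alpha} \log \sum_i p_i^\alpha, \qquad
  p \prec q \;\Longrightarrow\; H_\alpha(p) \ge H_\alpha(q).
\]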


Cited by
Journal Article
TL;DR: It is shown that the Unadjusted Langevin Algorithm can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order $2$, and a non-asymptotic analysis of this method for sampling from a log-concave smooth target distribution is given.
Abstract: In this paper, we provide new insights on the Unadjusted Langevin Algorithm. We show that this method can be formulated as a first-order optimization algorithm of an objective functional defined on the Wasserstein space of order $2$. Using this interpretation and techniques borrowed from convex optimization, we give a non-asymptotic analysis of this method for sampling from a log-concave smooth target distribution on $\mathbb{R}^d$. Based on this interpretation, we propose two new methods for sampling from a non-smooth target distribution, which we analyze as well. Moreover, these new algorithms are natural extensions of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm, which is a popular extension of the Unadjusted Langevin Algorithm. Similar to SGLD, they rely only on approximations of the gradient of the target log-density and can be used for large-scale Bayesian inference.

144 citations
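
The algorithm under study is simple to state in code. The following is a minimal sketch of the Unadjusted Langevin Algorithm itself, not of the paper's Wasserstein-space analysis; the Gaussian target, step size, and iteration count are illustrative assumptions.

import numpy as np

def ula_sample(grad_U, x0, step, n_iters, rng):
    # Unadjusted Langevin Algorithm:
    #   x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,
    # with xi_k standard Gaussian noise; samples approximately from exp(-U).
    x = np.asarray(x0, dtype=float).copy()
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Illustrative log-concave target: standard Gaussian on R^2,
# U(x) = ||x||^2 / 2, so grad_U(x) = x.
rng = np.random.default_rng(0)
draws = ula_sample(grad_U=lambda x: x, x0=np.zeros(2), step=0.1, n_iters=5000, rng=rng)
print(draws.mean(axis=0), draws.var(axis=0))  # mean near 0; variance slightly biased by the fixed step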

Book Chapter
TL;DR: This work surveys various recent developments on forward and reverse entropy power inequalities, not just for the Shannon-Boltzmann entropy but also more generally for Rényi entropy, and discusses connections between the so-called functional and probabilistic analogues of some classical inequalities in geometric functional analysis.
Abstract: The entropy power inequality, which plays a fundamental role in information theory and probability, may be seen as an analogue of the Brunn-Minkowski inequality. Motivated by this connection to Convex Geometry, we survey various recent developments on forward and reverse entropy power inequalities, not just for the Shannon-Boltzmann entropy but also more generally for Rényi entropy. In the process, we discuss connections between the so-called functional (or integral) and probabilistic (or entropic) analogues of some classical inequalities in geometric functional analysis.

95 citations
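
The analogy the survey takes as its starting point can be displayed side by side (both statements are standard and given here only for context): the entropy power inequality for independent random vectors mirrors the Brunn-Minkowski inequality for volumes of Minkowski sums.

% Entropy power inequality (independent X, Y in R^d; h = differential entropy)
% next to Brunn-Minkowski (compact A, B in R^d; |.| = volume).
\[
  N(X) = \frac{1}{2\pi e}\, e^{2h(X)/d}, \qquad
  N(X+Y) \ \ge\ N(X) + N(Y),
\]
\[
  |A + B|^{1/d} \ \ge\ |A|^{1/d} + |B|^{1/d}.
\]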

Posted Content
TL;DR: In this paper, general extensions of Rogozin's inequality for the essential supremum of a convolution of densities are obtained, in settings as general as Polish $\sigma$-compact groups; on $\mathbb{R}^d$, the result is combined with rearrangement inequalities for certain linear images to unify and sharpen the $\infty$-Rényi entropy power inequality of Bobkov and Chistyakov and the marginal bounds of Rudelson and Vershynin.
Abstract: General extensions of an inequality due to Rogozin, concerning the essential supremum of a convolution of probability density functions on the real line, are obtained. While a weak version of the inequality is proved in the very general context of Polish $\sigma$-compact groups, particular attention is paid to the group \(\mathbb{R}^d\), where the result can be combined with rearrangement inequalities for certain linear images to yield a strong generalization. As a consequence, we obtain a unification and sharpening of both the \(\infty\)-Rényi entropy power inequality for sums of independent random vectors, due to Bobkov and Chistyakov, and the bounds on marginals of projections of product measures due to Rudelson and Vershynin (matching and extending the sharp improvement of Livshyts, Paouris and Pivovarov). The proof is elementary and relies on a characterization of extreme points of a class of probability measures in the general setting of Polish measure spaces, as well as on the development of a generalization of Ball's cube slicing bounds for products of \(d\)-dimensional Euclidean balls (where the "co-dimension 1" case had recently been settled by Brzezinski).

43 citations
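
For context, the $\infty$-Rényi quantities in this abstract are governed by the essential supremum of the density (standard definitions; the extremal description of Rogozin's inequality is paraphrased here from the abstract, not quoted):

% Infinity-Renyi entropy and entropy power of X ~ f on R^d
% (standard definitions, stated here as background).
\[
  h_\infty(X) = -\log \|f\|_\infty, \qquad
  N_\infty(X) = e^{2h_\infty(X)/d} = \|f\|_\infty^{-2/d},
\]
so an upper bound on \(\|f_{X_1+\cdots+X_n}\|_\infty\) in terms of the individual \(\|f_{X_i}\|_\infty\) is exactly what an $\infty$-Rényi entropy power inequality for the sum requires.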

Journal Article
TL;DR: A lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X is derived, which leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity.
Abstract: We derive a lower bound on the differential entropy of a log-concave random variable $X$ in terms of the $p$-th absolute moment of $X$. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and distortion measure $| x - \hat x|^r$, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most $\log(\sqrt{\pi e}) \approx 1.5$ bits, independently of $r$ and the target distortion $d$. For mean-square error distortion, the difference is at most $\log (\sqrt{\frac{\pi e}{2}}) \approx 1$ bit, regardless of $d$. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most $\log (\sqrt{\frac{\pi e}{2}}) \approx 1$ bit. Our results generalize to the case of a vector $X$ with possibly dependent coordinates, and to $\gamma$-concave random variables. Our proof technique leverages tools from convex geometry.

38 citations
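
The numerical constants quoted in the abstract are straightforward to verify; the short sketch below evaluates them in bits, with the standard Shannon lower bound for mean-square error recorded as a comment.

import math

# Gap between the rate-distortion function and the Shannon lower bound,
# per the abstract, evaluated in bits (base-2 logarithms):
gap_general = 0.5 * math.log2(math.pi * math.e)        # distortion |x - xhat|^r
gap_mse = 0.5 * math.log2(math.pi * math.e / 2.0)      # mean-square error
print(f"general distortion gap: {gap_general:.3f} bits")  # ~1.547
print(f"mean-square error gap:  {gap_mse:.3f} bits")      # ~1.047

# Standard Shannon lower bound for MSE distortion level d (bits per sample):
#   R(d) >= h(X) - 0.5 * log2(2 * pi * e * d)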