
Showing papers by "Iain M. Johnstone published in 2007"


01 Jan 2007
TL;DR: The Kolmogorov ε-entropy of W^m_{2,0}(γ) is shown to be asymptotically equivalent, under the calibration D = ε², to the maximum rate-distortion R(D, X) over all stochastic processes X with sample paths in W^m_{2,0}(γ); this maximum is essentially attained by a family of Gaussian processes, and evaluating their rate-distortion functions yields the precise asymptotics of the ε-entropy.
Abstract: Shannon’s Rate-Distortion Theory describes the number of bits needed to approximately represent typical realizations of a stochastic process X = (X(t) : t ∈ T), while Kolmogorov’s ε-entropy describes the number of bits needed to approximately represent an arbitrary member f = (f(t) : t ∈ T) of a functional class F. For many stochastic processes a great deal is known about the behavior of the rate-distortion function, while for few functional classes F has there been success in determining, say, the precise asymptotics of the ε-entropy. Let W^m_{2,0}(γ) denote the class of functions f(t) on T = [0, 2π) with periodic boundary conditions and

  (1/2π) ∫₀^{2π} f(t)² dt + (1/2π) ∫₀^{2π} (f^{(m)}(t))² dt ≤ γ².

We show that for approximating functions of this class in L² norm we have the precise asymptotics of the Kolmogorov ε-entropy:

  H_ε(W^m_{2,0}(γ)) ∼ 2m (log₂ e) (γ/2ε)^{1/m},  ε → 0.  (0.1)

This follows from a connection between the Shannon and Kolmogorov theories, which allows us to exploit the powerful formalism of Shannon’s Rate-Distortion Theory to obtain information about the Kolmogorov ε-entropy. In fact, the Kolmogorov ε-entropy is asymptotically equivalent, as ε → 0, to the maximum rate-distortion R(D, X) over all stochastic processes X with sample paths in W^m_{2,0}(γ), where we make the calibration D = ε². There is a family of Gaussian processes X*_D which asymptotically, as D → 0, take realizations in W^m_{2,0}(γ), and for which the process at index D has essentially the highest rate-distortion R(D, X) of all processes X living in W^m_{2,0}(γ). We evaluate the rate-distortion function of members of this family, yielding formula (0.1). These results strongly parallel a key result in modern statistical decision theory, Pinsker’s theorem. This points to a connection between the theories of statistical estimation and data compression, which will be the theme of these lectures.
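The exponent 1/m in (0.1) can be motivated by a standard truncation-and-quantization count in the Fourier domain. The LaTeX sketch below is not the paper's argument: the single-Gaussian rate-distortion formula is classical, while the ellipsoid weights a_k ≍ k^m and the bit-allocation remark are the usual conventions for this periodic Sobolev class, assumed here for illustration; the sketch recovers only the order (γ/ε)^{1/m}, not the sharp constant 2m log₂ e.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Building block for R(D, X): the rate-distortion function of a single
% Gaussian coordinate under mean-squared distortion (classical formula).
For $X \sim N(0,\sigma^2)$ and distortion level $0 < D \le \sigma^2$,
\begin{equation*}
  R(D) = \tfrac{1}{2}\log_2\frac{\sigma^2}{D} \quad \text{bits};
\end{equation*}
for independent coordinates with variances $\sigma_k^2$, reverse
water-filling sums such terms subject to a common distortion budget.

% Heuristic entropy count (assumed normalization, not from the abstract):
% in the Fourier basis, membership in the class is an ellipsoid
% constraint with semi-axis weights $a_k$ of order $k^m$.
Write $f$ through its Fourier coefficients $(\theta_k)$, so that
\begin{equation*}
  \sum_k a_k^2\theta_k^2 \le \gamma^2, \qquad a_k \asymp k^m .
\end{equation*}
The tail beyond frequency $n$ is then uniformly small,
\begin{equation*}
  \sum_{k>n}\theta_k^2 \le \frac{\gamma^2}{a_n^2} \asymp \frac{\gamma^2}{n^{2m}},
\end{equation*}
so taking $n \asymp (\gamma/\epsilon)^{1/m}$ drives the truncation error
below $\epsilon$ in $L^2$. Quantizing the $n$ retained coefficients, with
bits allocated across coordinates as in reverse water-filling, costs
$O(n)$ bits in total, whence
\begin{equation*}
  H_\epsilon\bigl(W^m_{2,0}(\gamma)\bigr) \asymp n \asymp
  \Bigl(\frac{\gamma}{\epsilon}\Bigr)^{1/m},
\end{equation*}
matching the exponent in (0.1).

\end{document}

Under the calibration D = ε², summing the ½ log₂(σ_k²/D_k) terms over the least-favorable Gaussian family X*_D described in the abstract is what produces the sharp constant 2m log₂ e in (0.1).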

14 citations