Journal ArticleDOI
Asymptotic performance of block quantizers with difference distortion measures
TLDR
Gersho's bounds on the asymptotic performance of block quantizers, valid for vector distortion measures that are powers of the Euclidean or l_{2} norm, are generalized to difference distortion measures that are increasing functions of any seminorm, providing a k-dimensional generalization of Gish and Pierce's results for single-symbol quantizers.
Abstract
Gersho's bounds on the asymptotic (large rate or small distortion) performance of block quantizers are valid for vector distortion measures that are powers of the Euclidean or l_{2} norm. These results are generalized to difference distortion measures that are increasing functions of the seminorm of their argument, where any seminorm is allowed. This provides a k-dimensional generalization of Gish and Pierce's results for single-symbol quantizers. When the distortion measure is a power of a seminorm, the bounds are shown to be strictly better than the corresponding bounds provided by the kth-order rate-distortion functions.
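The class of distortion measures covered by the abstract can be made concrete with a small numerical sketch. The function names, the weight matrix, and the power r below are illustrative assumptions, not from the paper: the example builds d(x, y) = ||x - y||^r for a weighted l_2 seminorm induced by a positive-semidefinite matrix, which vanishes on a subspace and is therefore a seminorm rather than a norm.

```python
import numpy as np

def seminorm(v, W):
    """Seminorm induced by a positive-semidefinite matrix W: sqrt(v' W v)."""
    return float(np.sqrt(v @ W @ v))

def difference_distortion(x, y, W, r=2.0):
    """d(x, y) = ||x - y||_W ** r, an increasing function of the seminorm."""
    return seminorm(x - y, W) ** r

# W projects onto the first coordinate, so ||v||_W = |v[0]|: it is zero
# on the subspace {v : v[0] = 0}, hence a seminorm but not a norm.
W = np.array([[1.0, 0.0],
              [0.0, 0.0]])
x = np.array([1.0, 5.0])
y = np.array([3.0, -2.0])
print(difference_distortion(x, y, W, r=2.0))  # only the first coordinate matters
```

Taking W positive definite recovers the weighted-l_2 (and, with W = I, the Euclidean) special case already covered by Gersho's original bounds.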
Citations
Journal ArticleDOI
An Algorithm for Vector Quantizer Design
Y. Linde, A. Buzo, Robert M. Gray et al.
TL;DR: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data.
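The alternating iteration at the core of such vector quantizer design can be sketched in a few lines. This is a minimal Lloyd-style loop on a training sequence, with random-sample initialization and a fixed iteration count as simplifying assumptions; it is not the paper's full procedure (which also specifies splitting-based initialization and convergence tests).

```python
import numpy as np

def design_codebook(train, k, iters=50, seed=0):
    """Alternate nearest-neighbor partition and centroid update (Lloyd iteration)."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), size=k, replace=False)]
    for _ in range(iters):
        # Nearest-neighbor condition: assign each training vector to its
        # closest codeword under squared Euclidean distance.
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        # Centroid condition: each codeword becomes the mean of its cell.
        for j in range(k):
            cell = train[assign == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

train = np.random.default_rng(1).normal(size=(2000, 2))
cb = design_codebook(train, k=8)
print(cb.shape)  # (8, 2)
```

Each step can only decrease the average training distortion, which is why the iteration converges to a locally optimal codebook.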
Journal ArticleDOI
Quantization
Robert M. Gray, David L. Neuhoff et al.
TL;DR: The key to a successful quantization is the selection of an error criterion – such as entropy and signal-to-noise ratio – and the development of optimal quantizers for this criterion.
Journal ArticleDOI
Vector quantization in speech coding
John Makhoul, S. Roucos, H. Gish et al.
TL;DR: This tutorial review presents the basic concepts employed in vector quantization and gives a realistic assessment of its benefits and costs when compared to scalar quantization, and focuses primarily on the coding of speech signals and parameters.
Journal ArticleDOI
Trellis coded quantization of memoryless and Gauss-Markov sources
TL;DR: The authors adopt the notions of signal set expansion, set partitioning, and branch labeling from TCM, but modify the techniques to account for the source distribution, in order to design TCQ coders of low complexity with excellent mean-squared-error (MSE) performance.
Journal ArticleDOI
Asymptotic quantization error of continuous signals and the quantization dimension
TL;DR: Extensions of Bennett's limiting quantization error formula are proved, and random quantization, optimal quantization in the presence of an output information constraint, and quantization noise in high-dimensional spaces are investigated.
References
Journal ArticleDOI
Spectra of quantized signals
TL;DR: Quantizing of time, or time division, has found application as a means of multiplexing telephone channels, and the more familiar word “sampling” will be used here interchangeably with the rather formidable term “quantization of time”.
Journal ArticleDOI
Asymptotically optimal block quantization
TL;DR: A heuristic argument is given that generalizes Bennett's formula to block quantization, where a vector of random variables is quantized, leading to a rigorous method for obtaining upper bounds on the minimum distortion for block quantizers.
Journal ArticleDOI
Asymptotically efficient quantizing
TL;DR: It is shown that uniform quantizing yields an output entropy which is asymptotically smaller than that of any other quantizer, independent of the density function or the error criterion; the discrepancy between the entropy of the uniform quantizer and the rate-distortion function apparently lies with the inability of the optimal quantizing shapes to cover high-dimensional spaces without overlap.
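The high-rate behavior this result rests on can be checked numerically. The sketch below is illustrative, not from the paper: for a uniform scalar quantizer with small step delta, the mean-squared error approaches delta**2 / 12, and the output entropy approaches h(X) - log2(delta), where h(X) is the differential entropy of the source (0.5 * log2(2*pi*e) bits for a standard Gaussian).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)        # standard Gaussian source
delta = 0.05                        # small quantizer step (high-rate regime)
q = delta * np.round(x / delta)     # uniform (mid-tread) scalar quantizer

# Empirical mean-squared error vs. the high-rate approximation delta**2 / 12.
mse = np.mean((x - q) ** 2)

# Empirical output entropy vs. h(X) - log2(delta).
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
entropy = -np.sum(p * np.log2(p))

print(mse, delta**2 / 12)
print(entropy, 0.5 * np.log2(2 * np.pi * np.e) - np.log2(delta))
```

Both empirical quantities land close to their asymptotic predictions, which is the single-symbol picture that the main article's k-dimensional bounds generalize.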
Journal ArticleDOI
Useful Approximations to Optimum Quantization
TL;DR: In this paper, approximate methods are presented which, for a given number of quantization levels, reduce the determination of the optimum quantizer to a one-dimensional problem, allowing a simple discussion of the merits of nonuniform quantization.
Journal ArticleDOI
Comparison of optimal quantizations of speech reflection coefficients
A. Gray, Robert M. Gray, J. Markel et al.
TL;DR: Four quantization schemes for the reflection coefficients obtained from linear prediction speech analysis are theoretically compared, and it is shown that fixed-bit-rate minimum-deviation quantization without noiseless source coding can come within 0.26 bits per coefficient of the ultimate bit rate achievable by allowing noiseless source coding of the data.