Journal Article

A comparison of the Z, E8, and Leech lattices for quantization of low-shape-parameter generalized Gaussian sources

Abstract
In lattice vector quantization, the distortion associated with a given lattice is often expressed in terms of the G number (the lattice's normalized second moment), which measures the mean square error per dimension incurred when quantizing a uniform source. Subband image coefficients, however, are best modeled by a generalized Gaussian distribution, leading to distortion characteristics quite different from those encountered for uniform, Laplacian, or Gaussian sources. We have calculated the distortion associated with Z, E8, and Leech (1967) lattice quantization for coding of generalized Gaussian sources and show that, for low bit rates, the Z lattice offers both the best performance and the lowest implementational complexity.
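As a rough illustration of the idea (not code from the paper), the per-dimension distortion of scaled-Z lattice quantization, i.e. coordinate-wise rounding, can be estimated by Monte Carlo. The helper names (`gg_samples`, `z_lattice_mse`) and the use of NumPy are assumptions of this sketch; generalized Gaussian variates are drawn via the standard gamma-transform method.

```python
import numpy as np

rng = np.random.default_rng(0)

def gg_samples(shape_param, n, rng):
    # Generalized Gaussian via the gamma transform:
    # |X| = Gamma(1/p, 1)**(1/p), with a random sign.
    g = rng.gamma(1.0 / shape_param, 1.0, size=n) ** (1.0 / shape_param)
    return np.where(rng.random(n) < 0.5, -g, g)

def z_lattice_mse(x, step):
    # Quantize to the scaled Z lattice: round each coordinate to the
    # nearest multiple of `step`, then measure MSE per dimension.
    q = step * np.round(x / step)
    return np.mean((x - q) ** 2)

u = rng.uniform(-0.5, 0.5, 200_000)   # uniform source for reference
x = gg_samples(0.5, 200_000, rng)     # low shape parameter: heavy tails

# For a uniform source, MSE/step^2 approaches the G number of Z, 1/12.
print(z_lattice_mse(u, 1.0))          # close to 1/12 ≈ 0.0833
print(z_lattice_mse(x, 1.0))          # differs markedly from 1/12
```

This makes the abstract's point concrete: the same lattice and step size produce a per-dimension MSE for a low-shape-parameter source that departs substantially from the uniform-source figure that the G number summarizes.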


Citations
Journal Article

PDF optimized parametric vector quantization of speech line spectral frequencies

TL;DR: A low-complexity quantization scheme using transform coding and bit-allocation techniques, which allows easy mapping from an observation to its quantized value, is developed for both fixed-rate and variable-rate systems.
Journal Article

Bayesian learning of finite generalized Gaussian mixture models on images

TL;DR: A method to evaluate the posterior distribution and Bayes estimators using a Gibbs sampling algorithm is developed and validated on synthetic data, real datasets, texture classification and retrieval, and image segmentation, with comparisons against several other approaches.
Journal Article

Lattice vector quantization of generalized Gaussian sources

TL;DR: Algorithms for defining and quantizing to a Z lattice whose boundary is optimized to the characteristics of generalized Gaussian (GG) sources lead to high performance and low complexity at the bit rates and dimensions of interest in a number of practical coding applications.
Book Chapter

Bayesian learning of generalized Gaussian mixture models on biomedical images

TL;DR: A highly efficient unsupervised Bayesian algorithm for biomedical image segmentation and for spot detection in cDNA microarray images, based on generalized Gaussian mixture models, which are robust in the presence of noise and outliers and flexible enough to adapt to the shape of the data.
Journal Article

Low-Complexity Source Coding Using Gaussian Mixture Models, Lattice Vector Quantization, and Recursive Coding with Application to Speech Spectrum Quantization

TL;DR: The proposed scheme is shown to provide superior performance with a moderate increase in complexity when compared with conventional one-step linear-prediction-based compression schemes, for both narrow-band and wide-band speech.
References
Book

Vector Quantization and Signal Compression

TL;DR: The authors explain the design and implementation of quantizers for signal compression, including algorithms such as Levinson-Durbin, automating an otherwise labor-intensive and costly design process.
Journal Article

Fast quantizing and decoding algorithms for lattice quantizers and codes

TL;DR: Very fast algorithms are given for finding the closest lattice point to an arbitrary point, for use when these lattices serve as vector quantizers of uniformly distributed data.
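The fast-decoding idea summarized above can be sketched with the well-known Conway–Sloane construction: round each coordinate for Z^n; for D_n, fix the parity of the coordinate sum by re-rounding the worst coordinate; and decode E8 as the union of two cosets of D8. The code below is a minimal NumPy sketch under those assumptions, not an implementation from the cited paper.

```python
import numpy as np

def closest_Dn(x):
    # D_n = integer vectors with even coordinate sum.
    f = np.round(x)
    if int(f.sum()) % 2 != 0:
        # Re-round the coordinate with the largest rounding error to its
        # second-nearest integer; this fixes parity at minimal extra cost.
        k = int(np.argmax(np.abs(x - f)))
        f[k] += 1.0 if x[k] > f[k] else -1.0
    return f

def closest_E8(x):
    # E8 is the union of D8 and the coset D8 + (1/2, ..., 1/2);
    # decode against both cosets and keep the nearer point.
    h = np.full(8, 0.5)
    y0 = closest_Dn(x)
    y1 = closest_Dn(x - h) + h
    return y0 if np.sum((x - y0) ** 2) <= np.sum((x - y1) ** 2) else y1

# Example: a point near the deep coset of E8.
print(closest_E8(np.full(8, 0.45)))   # → [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
```

Both decoders run in O(n) time per vector, which is what makes these lattices attractive for quantization despite their dimension.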
Journal Article

On the structure of vector quantizers

TL;DR: Vector quantization is intrinsically superior to predictive coding, transform coding, and other suboptimal and ad hoc procedures, since it achieves optimal rate-distortion performance subject only to a constraint on the memory or block length of the observable signal segment being encoded.
Journal Article

A pyramid vector quantizer

TL;DR: Although suboptimum in a rate-distortion sense, the PVQ can encode large-dimensional vectors and therefore offers a significant reduction in rms distortion compared with the optimum Lloyd-Max scalar quantizer, providing an attractive alternative to currently available vector quantizers.
Journal Article

High-resolution quantization theory and the vector quantizer advantage

TL;DR: The authors consider how much performance advantage a fixed-dimensional vector quantizer can gain over a scalar quantizer, and collect several results from high-resolution or asymptotic quantization theory to identify source and system characteristics that contribute to the vector-quantizer advantage.