About: Trellis quantization is a research topic. Over its lifetime, 1094 publications have appeared within this topic, receiving 18877 citations.
Papers published on a yearly basis
01 Nov 1985
TL;DR: This tutorial review presents the basic concepts employed in vector quantization and gives a realistic assessment of its benefits and costs compared with scalar quantization, focusing primarily on the coding of speech signals and parameters.
Abstract: Quantization, the process of approximating continuous-amplitude signals by digital (discrete-amplitude) signals, is an important aspect of data compression or coding, the field concerned with the reduction of the number of bits necessary to transmit or store analog data, subject to a distortion or fidelity criterion. The independent quantization of each signal value or parameter is termed scalar quantization, while the joint quantization of a block of parameters is termed block or vector quantization. This tutorial review presents the basic concepts employed in vector quantization and gives a realistic assessment of its benefits and costs when compared to scalar quantization. Vector quantization is presented as a process of redundancy removal that makes effective use of four interrelated properties of vector parameters: linear dependency (correlation), nonlinear dependency, shape of the probability density function (pdf), and vector dimensionality itself. In contrast, scalar quantization can utilize effectively only linear dependency and pdf shape. The basic concepts are illustrated by means of simple examples and the theoretical limits of vector quantizer performance are reviewed, based on results from rate-distortion theory. Practical issues relating to quantizer design, implementation, and performance in actual applications are explored. While many of the methods presented are quite general and can be used for the coding of arbitrary signals, this paper focuses primarily on the coding of speech signals and parameters.
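The abstract's central claim, that vector quantization exploits linear dependency (correlation) that scalar quantization cannot, can be illustrated numerically. The sketch below is not from the paper; it is a minimal pure-Python comparison, assuming a correlated two-dimensional Gaussian source, Lloyd-Max levels for the 2-bit scalar quantizer, and a 16-entry pair codebook trained with a few Lloyd (k-means) iterations, so both schemes spend 4 bits per pair.

```python
import math
import random

random.seed(0)

# Correlated 2-D source: x2 tracks x1, so pairs cluster near the diagonal.
N = 2000
rho = 0.9
pairs = []
for _ in range(N):
    x1 = random.gauss(0.0, 1.0)
    x2 = rho * x1 + math.sqrt(1 - rho * rho) * random.gauss(0.0, 1.0)
    pairs.append((x1, x2))

# Scalar quantization: each component independently, 2 bits each
# (approximate Lloyd-Max levels for a unit Gaussian), 4 bits per pair.
LEVELS = [-1.510, -0.4528, 0.4528, 1.510]

def sq(x):
    return min(LEVELS, key=lambda l: (x - l) ** 2)

sq_mse = sum((x1 - sq(x1)) ** 2 + (x2 - sq(x2)) ** 2
             for x1, x2 in pairs) / (2 * N)

# Vector quantization: one 16-entry codebook over whole pairs (also
# 4 bits per pair), trained with a few Lloyd / k-means iterations.
codebook = random.sample(pairs, 16)
for _ in range(15):
    cells = [[] for _ in codebook]
    for p in pairs:
        j = min(range(16), key=lambda k: (p[0] - codebook[k][0]) ** 2
                                        + (p[1] - codebook[k][1]) ** 2)
        cells[j].append(p)
    codebook = [(sum(q[0] for q in c) / len(c), sum(q[1] for q in c) / len(c))
                if c else codebook[i] for i, c in enumerate(cells)]

def vq(p):
    return min(codebook, key=lambda cb: (p[0] - cb[0]) ** 2
                                       + (p[1] - cb[1]) ** 2)

vq_mse = sum((p[0] - vq(p)[0]) ** 2 + (p[1] - vq(p)[1]) ** 2
             for p in pairs) / (2 * N)

print(f"scalar MSE/component: {sq_mse:.4f}  vector MSE/component: {vq_mse:.4f}")
```

At the same rate, the pair codebook places its codevectors along the correlated diagonal, which is exactly the "linear dependency" gain the review describes.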
08 Sep 1993
TL;DR: This work shows how to compute a quantization matrix optimized for a particular image; custom matrices computed for a number of images show clear improvement over image-independent matrices.
Abstract: This presentation describes how a vision model incorporating contrast sensitivity, contrast masking, and light adaptation is used to design visually optimal quantization matrices for Discrete Cosine Transform image compression. The Discrete Cosine Transform (DCT) underlies several image compression standards (JPEG, MPEG, H.261). The DCT is applied to 8x8 pixel blocks, and the resulting coefficients are quantized by division and rounding. The 8x8 'quantization matrix' of divisors determines the visual quality of the reconstructed image; the design of this matrix is left to the user. Since each DCT coefficient corresponds to a particular spatial frequency in a particular image region, each quantization error consists of a local increment or decrement in a particular frequency. After adjustments for contrast sensitivity, local light adaptation, and local contrast masking, this coefficient error can be converted to a just-noticeable-difference (jnd). The jnd's for different frequencies and image blocks can be pooled to yield a global perceptual error metric. With this metric, we can compute for each image the quantization matrix that minimizes the bit-rate for a given perceptual error, or perceptual error for a given bit-rate. Implementation of this system demonstrates its advantages over existing techniques. A unique feature of this scheme is that the quantization matrix is optimized for each individual image. This is compatible with the JPEG standard, which requires transmission of the quantization matrix.
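The quantization step the abstract describes, dividing each DCT coefficient by a matrix entry and rounding, is simple to demonstrate. The sketch below is not the paper's perceptual optimization; it is a minimal illustration using the example luminance matrix from the JPEG standard in the slot where the paper's per-image optimized matrix would go, with a hand-rolled orthonormal 8x8 DCT and a made-up smooth test block.

```python
import math

# Example luminance quantization matrix from the JPEG standard (Annex K).
# The paper replaces this image-independent table with a perceptually
# optimized, per-image matrix transmitted in the same JPEG field.
Q = [[16, 11, 10, 16,  24,  40,  51,  61],
     [12, 12, 14, 19,  26,  58,  60,  55],
     [14, 13, 16, 24,  40,  57,  69,  56],
     [14, 17, 22, 29,  51,  87,  80,  62],
     [18, 22, 37, 56,  68, 109, 103,  77],
     [24, 35, 55, 64,  81, 104, 113,  92],
     [49, 64, 78, 87, 103, 121, 120, 101],
     [72, 92, 95, 98, 112, 100, 103,  99]]

def dct8x8(f):
    """Orthonormal 2-D DCT-II of one 8x8 block."""
    def c(u):
        return math.sqrt(0.125) if u == 0 else math.sqrt(0.25)
    return [[c(u) * c(v) * sum(f[x][y]
                               * math.cos((2 * x + 1) * u * math.pi / 16)
                               * math.cos((2 * y + 1) * v * math.pi / 16)
                               for x in range(8) for y in range(8))
             for v in range(8)] for u in range(8)]

# A smooth hypothetical test block, level-shifted by -128 as in JPEG.
block = [[x * 8 + y * 4 - 128 for y in range(8)] for x in range(8)]

F = dct8x8(block)
# Quantization by division and rounding; dequantization multiplies back.
quantized = [[round(F[u][v] / Q[u][v]) for v in range(8)] for u in range(8)]
dequantized = [[quantized[u][v] * Q[u][v] for v in range(8)] for u in range(8)]
max_err = max(abs(F[u][v] - dequantized[u][v])
              for u in range(8) for v in range(8))
```

Each coefficient error is bounded by half its divisor, which is why the matrix entry for a given frequency directly controls the visibility of the error at that frequency, the quantity the paper's jnd model optimizes.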
TL;DR: The authors adopt the notions of signal set expansion, set partitioning, and branch labeling of TCM, but modify the techniques to account for the source distribution, to design TCQ coders of low complexity with excellent mean-squared-error (MSE) performance.
Abstract: Trellis-coded quantization (TCQ) is developed and applied to the encoding of memoryless and Gauss-Markov sources. The theoretical justification for the approach is alphabet-constrained rate distortion theory, which is a dual to the channel capacity argument that motivates trellis-coded modulation (TCM). The authors adopt the notions of signal set expansion, set partitioning, and branch labeling of TCM, but modify the techniques to account for the source distribution, to design TCQ coders of low complexity with excellent mean-squared-error (MSE) performance. For a memoryless uniform source, TCQ provides an MSE within 0.21 dB of the distortion-rate bound at all positive (integral) rates. The performance is superior to that promised by the coefficient of quantization for all of the best lattices known in dimensions 24 or less. For a memoryless Gaussian source, the TCQ performance at rates of 0.5, 1, and 2 b/sample is superior to all previous results the authors found in the literature. The encoding complexity of TCQ is very modest. TCQ is incorporated into a predictive coding structure for the encoding of Gauss-Markov sources. Simulation results for first-, second-, and third-order Gauss-Markov sources are presented.
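The TCQ structure described above can be sketched end to end: double the codebook, partition it into subsets by set partitioning, label trellis branches with subsets, and search for the minimum-MSE path with the Viterbi algorithm. The code below is an illustrative toy, not the paper's implementation; it assumes a uniform source on [-1, 1), an 8-level uniform codebook partitioned into four subsets, and one common 4-state trellis labeling, giving 1 trellis bit + 1 codeword bit = 2 bits/sample.

```python
import random

random.seed(1)

# Doubled codebook: 8 uniform levels for a 2 bits/sample quantizer,
# partitioned by set partitioning into 4 subsets of 2 (index mod 4),
# so codewords within a subset are far apart.
LEVELS = [-0.875 + 0.25 * i for i in range(8)]
SUBSET = [[LEVELS[j] for j in range(8) if j % 4 == d] for d in range(4)]

# One common 4-state trellis labeling: (next state, subset index) pairs.
BRANCHES = {0: [(0, 0), (1, 2)],
            1: [(2, 1), (3, 3)],
            2: [(0, 2), (1, 0)],
            3: [(2, 3), (3, 1)]}

def tcq_encode(samples):
    """Viterbi search for the minimum-MSE trellis path; returns the
    per-sample reconstruction levels along the winning path."""
    cost = {0: 0.0}          # start in state 0
    paths = {0: []}
    for s_val in samples:
        new_cost, new_paths = {}, {}
        for state, c in cost.items():
            for nxt, d in BRANCHES[state]:
                # Best codeword inside this branch's subset.
                level = min(SUBSET[d], key=lambda l: (s_val - l) ** 2)
                cc = c + (s_val - level) ** 2
                if cc < new_cost.get(nxt, float("inf")):
                    new_cost[nxt] = cc
                    new_paths[nxt] = paths[state] + [level]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]

x = [random.uniform(-1, 1) for _ in range(2000)]
recon = tcq_encode(x)
tcq_mse = sum((a - b) ** 2 for a, b in zip(x, recon)) / len(x)

# Plain 2-bit uniform scalar quantizer on the same samples, for reference.
SQ_LEVELS = [-0.75, -0.25, 0.25, 0.75]
sq_mse = sum((a - min(SQ_LEVELS, key=lambda l: (a - l) ** 2)) ** 2
             for a in x) / len(x)
```

Even this 4-state toy beats the scalar quantizer at the same rate, because from any state the union of the two reachable subsets forms a 4-level quantizer whose offset the trellis can adapt sample by sample.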
03 Jan 1992
TL;DR: A digital video compression system and an apparatus implementing it are disclosed, in which matrices of pixels in the RGB signal format are converted into YUV representation, including a step of selectively sampling the chrominance components.
Abstract: A digital video compression system and an apparatus implementing this system are disclosed. Specifically, matrices of pixels in the RGB signal format are converted into YUV representation, including a step of selectively sampling the chrominance components. The signals are then subjected to a discrete cosine transform (DCT). A circuitry implementing the DCT in a pipelined architecture is provided. A quantization step eliminates DCT coefficients having amplitude below a set of preset thresholds. The video signal is further compressed by coding the elements of the quantized matrices in a zig-zag manner. This representation is further compressed by Huffman codes. Decompression of the signal is substantially the reverse of the compression steps. The inverse discrete cosine transform (IDCT) may be implemented by the DCT circuit. Circuits for implementing RGB to YUV conversion, DCT, quantization, coding and their decompression counterparts are disclosed. The circuits may be implemented in the form of an integrated circuit chip.
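The zig-zag coding step mentioned in the abstract orders the quantized 8x8 coefficients from low to high frequency, so the many high-frequency zeros end up in one trailing run. This is a generic sketch of that scan order, not the patent's circuitry; the example block values are made up.

```python
def zigzag_order(n=8):
    """Standard zig-zag scan order for an n x n coefficient block:
    walk the anti-diagonals, alternating direction."""
    return sorted(((x, y) for x in range(n) for y in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

# A hypothetical quantized block: only a few low-frequency coefficients
# survive quantization, as is typical after thresholding.
block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[1][0], block[1][1] = 26, -3, 1, 2

order = zigzag_order()
scan = [block[x][y] for x, y in order]

# In the scanned sequence, all the zeros collapse toward the end, where
# a single end-of-block symbol (and Huffman coding) finishes them off.
last_nonzero = max(i for i, v in enumerate(scan) if v != 0)
trailing_zeros = len(scan) - last_nonzero - 1
```

Here 59 of the 64 scanned positions form one trailing run of zeros, which is what makes the subsequent Huffman stage effective.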
TL;DR: A new method for filling a color table is presented that produces pictures of quality similar to that of existing methods, but requires less memory and execution time.
Abstract: A new method for filling a color table is presented that produces pictures of quality similar to that of existing methods, but requires less memory and execution time. All colors of an image are inserted in an octree, and this octree is reduced from the leaves to the root in such a way that every pixel has a well-defined maximum error. The algorithm is described in PASCAL notation.