Journal Article

Theory and practice of vector quantizers trained on small training sets

TLDR
The authors conclude that, by using training sets comprising only a small fraction of the available data, one can produce results that are close to the results obtainable when all available data are used.
Abstract
Examines how the performance of a memoryless vector quantizer changes as a function of its training set size. Specifically, the authors study how well the training set distortion predicts test distortion when the training set is a randomly drawn subset of blocks from the test or training image(s). Using the Vapnik-Chervonenkis (VC) dimension, the authors derive formal bounds for the difference of test and training distortion of vector quantizer codebooks. The authors then describe extensive empirical simulations that test these bounds for a variety of codebook sizes and vector dimensions, and give practical suggestions for determining the training set size necessary to achieve good generalization from a codebook. The authors conclude that, by using training sets comprising only a small fraction of the available data, one can produce results that are close to the results obtainable when all available data are used.
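
As a rough illustration of the experiment the abstract describes, the sketch below trains a codebook on a small random subset of image blocks and compares training distortion with distortion over all available blocks. It uses plain k-means (via scipy.cluster.vq) as the codebook trainer; the block size, codebook size, training fraction, and the random stand-in image are illustrative assumptions, not values from the paper.

```python
# Sketch of a train/test distortion comparison for a memoryless VQ trained on a
# small random subset of image blocks. All parameter choices here are
# illustrative assumptions, not values taken from the paper.
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def image_to_blocks(img, b=4):
    """Split a 2-D image into non-overlapping b x b blocks (vectors of length b*b)."""
    h, w = img.shape
    img = img[: h - h % b, : w - w % b]
    blocks = img.reshape(img.shape[0] // b, b, img.shape[1] // b, b)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, b * b).astype(float)

def distortion(vectors, codebook):
    """Mean squared error per vector component under nearest-codeword encoding."""
    _, dists = vq(vectors, codebook)        # Euclidean distance to nearest codeword
    return float(np.mean(dists ** 2) / vectors.shape[1])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512)).astype(float)  # stand-in for a real image
blocks = image_to_blocks(img, b=4)

train_fraction = 0.05                        # train on 5% of the available blocks
n_train = int(train_fraction * len(blocks))
train = blocks[rng.choice(len(blocks), size=n_train, replace=False)]

codebook, _ = kmeans2(train, 64, iter=25, minit="points")  # 64-word codebook
print("training distortion:", distortion(train, codebook))
print("test distortion:    ", distortion(blocks, codebook))
```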


Citations
Journal Article

Quantization

TL;DR: The key to a successful quantization is the selection of an error criterion – such as entropy and signal-to-noise ratio – and the development of optimal quantizers for this criterion.
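
As a small illustration of the two criteria named in the summary above, the sketch below computes the signal-to-noise ratio of a reconstruction and the entropy of the quantizer's output indices; the uniform scalar quantizer and the Gaussian input are stand-in assumptions, not part of the cited paper.

```python
# Two common performance/error criteria for a quantizer: SNR of the
# reconstruction and entropy of the output index distribution. A uniform
# scalar quantizer on Gaussian data is used purely as an illustrative stand-in.
import numpy as np

def uniform_quantize(x, step):
    return step * np.round(x / step)

def snr_db(x, x_hat):
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))

def output_entropy_bits(indices):
    _, counts = np.unique(indices, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
step = 0.25
x_hat = uniform_quantize(x, step)
indices = np.round(x / step).astype(int)
print("SNR (dB):", snr_db(x, x_hat))
print("output entropy (bits/sample):", output_entropy_bits(indices))
```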
Journal Article

Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding

TL;DR: Rate of convergence results are established for vector quantization for memoryless real-valued sources with bounded support at transmission rate R.
Book Chapter

Learning-theoretic methods in vector quantization

TL;DR: The principal goal of data compression is to replace data by a compact representation in such a manner that from this representation the original data can be reconstructed either perfectly, or with high enough accuracy.
Journal Article

How tight are the Vapnik-Chervonenkis bounds?

TL;DR: It is found that, in some cases, the average generalization of neural networks trained on a variety of simple functions is significantly better than the VC bound: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m result of the bound.
Journal Article

On the training distortion of vector quantizers

TL;DR: The bounds show that the training distortion can underestimate the minimum distortion of a truly optimal quantizer by as much as a constant times n^{-1/2}, where n is the size of the training data.
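
Written out, the statement above has roughly the following form; this is a paraphrase of the cited result with an unspecified constant c, not its exact statement.

```latex
% Paraphrase of the quoted bound: for some sources, with n training vectors the
% expected training distortion of an empirically designed codebook falls below
% the distortion D* of a truly optimal quantizer by order n^{-1/2}.
\[
  D^{*} - \mathbb{E}\bigl[D_{\mathrm{train}}(n)\bigr] \;\ge\; \frac{c}{\sqrt{n}}
  \qquad \text{for some constant } c > 0 .
\]
```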
References
Journal Article

An Algorithm for Vector Quantizer Design

TL;DR: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data.
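
The entry above refers to the Linde-Buzo-Gray design algorithm; the following is a minimal numpy sketch of the generalized Lloyd iteration it builds on, assuming a squared-error distortion measure. It is a simplified illustration, not the paper's exact splitting-based procedure.

```python
# Minimal generalized Lloyd (LBG-style) iteration for squared-error distortion:
# alternate nearest-codeword assignment with centroid updates. Simplified
# sketch; the cited paper additionally uses a codebook-splitting initialization.
import numpy as np

def lloyd_codebook(train, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    train = np.asarray(train, dtype=float)
    codebook = train[rng.choice(len(train), size=k, replace=False)].copy()
    for _ in range(iters):
        # Partition step: assign every training vector to its nearest codeword.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Centroid step: move each codeword to the mean of its cell; reseed
        # empty cells from a random training vector to keep k codewords.
        for j in range(k):
            cell = train[labels == j]
            codebook[j] = cell.mean(axis=0) if len(cell) else train[rng.integers(len(train))]
    return codebook
```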
Book

Vector Quantization and Signal Compression

TL;DR: The book covers the design and implementation of scalar and vector quantizers and related signal compression techniques, including linear prediction via the Levinson-Durbin algorithm.
Book Chapter

On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities

TL;DR: This chapter reproduces the English translation by B. Seckler of the paper by Vapnik and Chervonenkis in which they gave proofs for the innovative results they had obtained in a draft form in July 1966 and announced in 1968 in their note in Soviet Mathematics Doklady.
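
For reference, the uniform convergence result of that paper is usually quoted in roughly the following form, where m^S(2l) is the growth function of the event class S; this is the commonly cited textbook form, not a verbatim quote of the translation.

```latex
% Commonly cited form of the Vapnik-Chervonenkis uniform convergence bound:
% relative frequencies nu_l(A) of events A over l samples converge to their
% probabilities P(A) uniformly over the class S.
\[
  \Pr\Bigl\{ \sup_{A \in S} \bigl|\nu_l(A) - P(A)\bigr| > \varepsilon \Bigr\}
  \;\le\; 4\, m^{S}(2l)\, e^{-\varepsilon^{2} l / 8}.
\]
```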
Journal Article

Vector quantization

TL;DR: During the past few years several design algorithms have been developed for a variety of vector quantizers and the performance of these codes has been studied for speech waveforms, speech linear predictive parameter vectors, images, and several simulated random processes.
Book

Estimation of Dependences Based on Empirical Data

TL;DR: Vapnik's monograph on estimating dependences from empirical data; it presents the big picture of inference, arguing for direct inference instead of generalization.