
Showing papers on "Codebook published in 1999"


Patent
24 Aug 1999
TL;DR: In this paper, a method of encoding an input speech signal using a multi-rate encoder having a plurality of encoding rates is disclosed, where a high-pass filter and then a perceptual weighting filter are applied to such signal to generate a first target signal.
Abstract: A method of encoding an input speech signal using a multi-rate encoder having a plurality of encoding rates is disclosed. A high-pass filter and then a perceptual weighting filter are applied to such signal to generate a first target signal. An adaptive codebook vector is identified from an adaptive codebook using the first target signal by filtering the vector to generate a filtered adaptive codebook vector. An adaptive codebook gain for the adaptive codebook vector is calculated and an error signal minimized. The adaptive codebook gain is adaptively reduced based on one encoding rate from the plurality of encoding rates to generate a reduced adaptive codebook gain. A second target signal based at least on the first target signal and the reduced adaptive codebook gain is generated. The input speech signal is converted into an encoded speech based on the second target signal.
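The adaptive codebook step described above can be sketched as follows. This is a simplified illustration, not the patent's method: each candidate vector is matched to the target directly, whereas the patent first filters it through the perceptual weighting/synthesis filter, and all function and variable names here are our own.

```python
import numpy as np

def adaptive_codebook_search(target, past_excitation, lags):
    """Find the pitch lag whose adaptive codebook vector best matches the
    target, and the optimal (unreduced) gain for it.

    Simplified sketch: filtering of each candidate vector is omitted, so
    vectors are compared to the target directly.
    """
    n, L = len(target), len(past_excitation)
    best_lag, best_gain, best_score = None, 0.0, -np.inf
    for lag in lags:
        start = L - lag
        if lag >= n:
            v = past_excitation[start:start + n]
        else:
            # For lags shorter than the frame, repeat the last `lag` samples.
            v = np.tile(past_excitation[start:], n // lag + 1)[:n]
        energy = float(v @ v)
        if energy == 0.0:
            continue
        corr = float(target @ v)
        score = corr * corr / energy   # maximising this minimises the error
        if score > best_score:
            best_lag, best_gain, best_score = lag, corr / energy, score
    return best_lag, best_gain
```

On a target that is a scaled periodic continuation of the past excitation, the search recovers both the period and the scale factor as the lag and gain.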

111 citations


Proceedings Article
01 Jan 1999
TL;DR: In this article, a spectral domain, speech enhancement algorithm is proposed based on a mixture model for the short time spectrum of the clean speech signal, and on a maximum assumption in the production of the noisy speech spectrum.
Abstract: We present a spectral domain speech enhancement algorithm. The new algorithm is based on a mixture model for the short-time spectrum of the clean speech signal, and on a maximum assumption in the production of the noisy speech spectrum. In the past this model was used in the context of noise-robust speech recognition. In this paper we show that this model is also effective for improving the quality of speech signals corrupted by additive noise. The computational requirements of the algorithm can be significantly reduced, essentially without paying performance penalties, by incorporating a dual codebook scheme with tied variances. Experiments using recorded speech signals and actual noise sources show that, in spite of its low computational requirements, the algorithm achieves improved performance compared with alternative speech enhancement algorithms.
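A minimal sketch of the mixture-based estimation step, assuming Gaussian components over log-spectra and using the component means, capped by the observation, as per-component clean estimates. The paper's exact estimator under the max assumption is more involved; everything here, including the names, is illustrative.

```python
import numpy as np

def enhance_frame(noisy_logspec, means, variances, priors):
    """Blend per-component clean-speech proposals by their posteriors.

    Under the max assumption the noisy log-spectrum is (roughly) the
    element-wise max of clean speech and noise, so the clean signal never
    exceeds the observation; each component's mean is therefore capped at
    the observation before blending. A simplified stand-in estimator.
    """
    # log N(y; mu_k, sigma_k^2), summed over frequency bins
    log_lik = -0.5 * np.sum(
        (noisy_logspec - means) ** 2 / variances
        + np.log(2 * np.pi * variances), axis=1)
    log_post = np.log(priors) + log_lik
    log_post -= log_post.max()                 # numerical stability
    post = np.exp(log_post)
    post /= post.sum()
    proposals = np.minimum(means, noisy_logspec)  # clean <= noisy under max model
    return post @ proposals, post
```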

110 citations


Journal ArticleDOI
TL;DR: A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented, and a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates is presented.
Abstract: A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented. The universal trellis coded quantization (UTCQ) technique requires neither stored codebooks nor a computationally intense codebook design algorithm. Its performance is comparable with that of fully optimized entropy-constrained trellis coded quantization (ECTCQ) for most encoding rates. The codebook and trellis geometry of UTCQ are symmetric with respect to the trellis superset. This allows sources with a symmetric probability density to be encoded with a single variable-rate code. Rate allocation and quantizer modeling procedures are given for UTCQ which allow access to continuous quantization rates. An image coding application based on adaptive wavelet coefficient subblock classification, arithmetic coding, and UTCQ is presented. The excellent performance of this coder demonstrates the efficacy of UTCQ. We also present a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates. This scheme has the added advantage of being applied during image decoding, without the need to reencode the original image.

101 citations


Journal ArticleDOI
TL;DR: A direct waveform mean-shape vector quantization (MSVQ) is proposed here as an alternative for electrocardiographic (ECG) signal compression, leading to high compression ratios (CRs) while maintaining a low level of waveform distortion and preserving the main clinically interesting features of the ECG signals.
Abstract: A direct waveform mean-shape vector quantization (MSVQ) is proposed here as an alternative for electrocardiographic (ECG) signal compression. In this method, the mean values of short ECG signal segments are quantized as scalars, and the single-lead ECG is compressed by average beat subtraction and residual differencing, with the waveshapes coded through a vector quantizer. An entropy encoder is applied to both mean and vector codes to further increase compression without degrading the quality of the reconstructed signals. In this paper, the fundamentals of MSVQ are discussed, along with various parameter specifications such as the duration of signal segments, the wordlength of the mean-value quantization, and the size of the vector codebook. The method is assessed through percent-residual-difference measures on reconstructed signals, whereas its computational complexity is analyzed considering its real-time implementation. As a result, MSVQ has been found to be an efficient compression method, leading to high compression ratios (CRs) while maintaining a low level of waveform distortion and, consequently, preserving the main clinically interesting features of the ECG signals. CRs in excess of 39 have been achieved, yielding low data rates of about 140 bps. This compression factor makes the technique especially attractive in the area of ambulatory monitoring.
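The mean-shape split at the heart of MSVQ can be sketched as follows. The codebooks below are placeholder arrays rather than trained ECG codebooks, and the entropy coding stage is omitted.

```python
import numpy as np

def msvq_encode(segment, mean_levels, shape_codebook):
    """Mean-shape VQ of one signal segment (illustrative sketch).

    The segment mean is scalar-quantized against `mean_levels`, the
    mean-removed shape is matched to the nearest entry of
    `shape_codebook`, and the pair of indices is the code.
    """
    m = segment.mean()
    mi = int(np.argmin(np.abs(mean_levels - m)))
    shape = segment - m
    si = int(np.argmin(np.sum((shape_codebook - shape) ** 2, axis=1)))
    return mi, si

def msvq_decode(mi, si, mean_levels, shape_codebook):
    # Reconstruction: quantized mean plus quantized shape.
    return mean_levels[mi] + shape_codebook[si]
```

When both codebooks contain exact matches, the round trip is lossless; in practice the distortion depends on the codebook sizes discussed in the abstract.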

79 citations


Patent
Yang Gao1
24 Aug 1999
TL;DR: In this article, a multi-rate speech codec supports a number of encoding bit rate modes by adaptively selecting encoding bit rate modes to match communication channel restrictions, and a variety of techniques are applied, many of which involve the classification of the input signal.
Abstract: A multi-rate speech codec supports a number of encoding bit rate modes by adaptively selecting encoding bit rate modes to match communication channel restrictions. In higher bit rate encoding modes, an accurate representation of speech through CELP (code-excited linear prediction) and other associated modeling parameters is generated for higher quality decoding and reproduction. To achieve high quality in lower bit rate encoding modes, the speech encoder departs from the strict waveform matching criteria of regular CELP coders and strives to identify significant perceptual features of the input signal. To support lower bit rate encoding modes, a variety of techniques are applied, many of which involve the classification of the input signal. For each of the bit rate modes selected, a number of fixed or innovation sub-codebooks are selected for use in generating innovation vectors.

77 citations


Patent
Jes Thyssen1
24 Aug 1999

60 citations


Proceedings ArticleDOI
TL;DR: It is demonstrated how a small set of codebook vectors, extracted from a learning vector quantizer, can be used to estimate the class-conditional densities of the low-level observed feature needed for the Bayesian methodology.
Abstract: Developing semantic indices into large image databases is a challenging and important problem in content-based image retrieval. We address the problem of detecting objects in an image based on color and texture features. Specifically, we consider the following two problems of detecting sky and vegetation in outdoor images. An image is divided into 16 X 16 sub-blocks, and color, texture, and position features are extracted from every sub-block. We demonstrate how a small set of codebook vectors, extracted from a learning vector quantizer, can be used to estimate the class-conditional densities of the low-level observed features needed for the Bayesian methodology. The sky and vegetation detectors have been trained on over 400 color images from the Corel database. We achieve classification accuracies of over 94 percent for both classifiers on the training data. We are currently extending our evaluation to a larger database of 1,700 images.
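Turning a small codebook into class-conditional densities for a Bayes decision can be sketched as follows. The Gaussian kernel placed on each codebook vector and the isotropic bandwidth are our assumptions; the paper's exact density estimate may differ.

```python
import numpy as np

def class_conditional_density(x, codebook, bandwidth=1.0):
    """Estimate p(x | class) from that class's codebook vectors with an
    isotropic Gaussian kernel on each vector (illustrative choice).
    """
    d2 = np.sum((codebook - x) ** 2, axis=1)
    dim = codebook.shape[1]
    norm = (2 * np.pi * bandwidth ** 2) ** (dim / 2)
    return float(np.mean(np.exp(-0.5 * d2 / bandwidth ** 2)) / norm)

def bayes_classify(x, codebooks, priors):
    """MAP decision over classes using the codebook-based densities."""
    scores = [p * class_conditional_density(x, cb)
              for cb, p in zip(codebooks, priors)]
    return int(np.argmax(scores))
```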

52 citations


Patent
11 Mar 1999
TL;DR: In this paper, a system and method for synthesizing a facial image, compares a speech frame from an incoming speech signal with acoustic features stored within visually similar entries in an audio-visual codebook to produce a set of weights.
Abstract: A system and method for synthesizing a facial image, compares a speech frame from an incoming speech signal with acoustic features stored within visually similar entries in an audio-visual codebook to produce a set of weights. The audio-visual codebook also stores visual features corresponding to the acoustic features. A composite visual feature is generated as a weighted sum of the corresponding visual features, from which the facial image is synthesized. The audio-visual codebook may include multiple samples of the acoustic and visual features for each entry, which corresponds to a sequence of one or more phonemes.
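The weighted-sum synthesis step can be sketched as follows. The patent only says the acoustic comparison "produces a set of weights", so the softmax over negative distances used here, and all names, are our assumptions.

```python
import numpy as np

def synthesize_visual(speech_frame, acoustic_feats, visual_feats, temperature=1.0):
    """Blend stored visual features by acoustic similarity (sketch).

    Each codebook entry stores an acoustic feature and a corresponding
    visual feature; weights come from a softmax over negative squared
    acoustic distances to the incoming speech frame.
    """
    d2 = np.sum((acoustic_feats - speech_frame) ** 2, axis=1)
    w = np.exp(-d2 / temperature)
    w /= w.sum()
    # Composite visual feature: weighted sum of the stored visual features.
    return w @ visual_feats, w
```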

50 citations


Patent
15 Nov 1999
TL;DR: In this article, a random code vector reading section was replaced with an oscillator for outputting different vector streams in accordance with values of input seeds, and a seed storage section for storing a pluralitty of seeds.
Abstract: A random code vector reading section and a random codebook of a conventional CELP type speech coder/decoder are respectively replaced with an oscillator for outputting different vector streams in accordance with the values of input seeds, and a seed storage section for storing a plurality of seeds. This makes it unnecessary to store fixed vectors as they are in a fixed codebook (ROM), thereby considerably reducing the memory capacity.
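The seed-for-codebook trade can be sketched as follows, using NumPy's seeded generator as a stand-in for the patent's oscillator: only the seeds are stored, and the encoder and decoder regenerate identical codevectors on demand.

```python
import numpy as np

def seeded_random_codevector(seed, length):
    """Regenerate a 'random codebook' vector from a stored seed.

    A deterministic generator (NumPy's default PCG64 here, standing in
    for the patent's oscillator) reproduces the same vector every time
    it is given the same seed, so no fixed vectors need to be stored.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal(length)

def build_virtual_codebook(seeds, length):
    # The decoder rebuilds an identical codebook from the same seed list.
    return np.array([seeded_random_codevector(s, length) for s in seeds])
```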

46 citations


Journal ArticleDOI
TL;DR: The wave-based matched-pursuits algorithm is used to develop a codebook of features that are representative of time-domain scattering from a target of interest, accounting for the variability of these features as a function of target-sensor orientation.
Abstract: The method of matched pursuits is an algorithm by which a waveform is parsed into its fundamental constituents; here, in the context of short-pulse electromagnetic scattering, these are wavefronts and resonances (constituting what we have called wave-based matched pursuits). The wave-based matched-pursuits algorithm is used to develop a codebook of features that are representative of time-domain scattering from a target of interest, accounting for the variability of these features as a function of target-sensor orientation. This codebook is subsequently used in the context of a hidden Markov model (HMM) in which the probability of measuring a particular codebook element is quantified as a function of target-sensor orientation. We review the wave-based matched-pursuits algorithm and its use in the context of an HMM (for target identification). Finally, this new wave-based signal processing algorithm is demonstrated with simulated scattering data, with additive noise.

38 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: An algorithm for automatic image orientation estimation using a Bayesian learning framework is presented and it is shown how feature clustering can be used as a feature selection mechanism to remove redundancies in the high-dimensional feature vectors used for classification.
Abstract: We present an algorithm for automatic image orientation estimation using a Bayesian learning framework. We demonstrate that a small codebook (the optimal size of codebook is selected using a modified MDL criterion) extracted from a vector quantizer can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. We further show how feature clustering can be used as a feature selection mechanism to remove redundancies in the high-dimensional feature vectors used for classification. Experiments on a database of 17,901 images have shown that our proposed algorithm achieves an accuracy of approximately 97% on the training set and over 89% on an independent test set.

Proceedings ArticleDOI
01 Jan 1999
TL;DR: This paper proposes a simple and effective method to improve the image quality of the BPM in each phase, which generates a tree-structured codebook to describe the contents of the pixels of the transmitted image.
Abstract: The bit-plane method (BPM) is the simplest way to implement a progressive image transmission (PIT) system. In this paper, we propose a simple and effective method to improve the image quality of the BPM in each phase. This method generates a tree-structured codebook to describe the contents of the pixels of the transmitted image. By transmitting this tree-structured codebook to the receiver level by level, we can increase the image quality of the BPM in each phase. The experimental results show that the image quality of our scheme is better than that of the BPM. Moreover, our scheme produces images that are more satisfying to the human visual system in an earlier phase than the BPM.

Journal ArticleDOI
TL;DR: An adaptive vector quantizer using a clustering technique known as adaptive fuzzy leader clustering (AFLC) that is similar in concept to deterministic annealing (DA) for VQ codebook design has been developed and exhibits much better performance than the above techniques.
Abstract: An adaptive vector quantizer (VQ) using a clustering technique known as adaptive fuzzy leader clustering (AFLC) that is similar in concept to deterministic annealing (DA) for VQ codebook design has been developed. This vector quantizer, AFLC-VQ, has been designed to vector quantize wavelet-decomposed subimages with optimal bit allocation. The high-resolution subimages at each level have been statistically analyzed to conform to generalized Gaussian probability distributions by selecting the optimal number of filter taps. The adaptive characteristics of AFLC-VQ result from AFLC, an algorithm that uses self-organizing neural networks with fuzzy membership values of the input samples for upgrading the cluster centroids based on well-known optimization criteria. By generating codebooks containing codewords of varying bits, AFLC-VQ is capable of compressing large color/monochrome medical images at extremely low bit rates (0.1 bpp and less) and yet yielding high fidelity reconstructed images. The quality of the reconstructed images formed by AFLC-VQ has been compared with JPEG and EZW, the standard and the well-known wavelet-based compression technique (using scalar quantization), respectively, in terms of statistical performance criteria as well as visual perception. AFLC-VQ exhibits much better performance than the above techniques. JPEG and EZW were chosen as comparative benchmarks since these have been used in radiographic image compression. The superior performance of AFLC-VQ over LBG-VQ has been reported in earlier papers. © 1999 SPIE and IS&T. (S1017-9909(99)01301-X)

Patent
James P. Ashley1, Weimin Peng1
24 Aug 1999
TL;DR: In this paper, position combinations among two or more pulses (403) are implemented to achieve high quality speech reconstruction at low bit rates, and certain combinations of pulses are prohibited which allows the most significant pulses to always be coded, thereby improving speech quality.
Abstract: To achieve high quality speech reconstruction at low bit rates, constraints on position combinations among two or more pulses (403) are implemented. By placing constraints on position combinations, certain combinations of pulses are prohibited which allows the most significant pulses to always be coded, thereby improving speech quality. After all valid combinations are considered, a list of pulse pairs (codebook) which can be indexed using a single, predetermined bit length codeword is produced. The codeword is transmitted to a destination where it is used by a decoder to reconstruct the original information signal.

Patent
26 Feb 1999
TL;DR: In this article, a method and apparatus for compressing video data images using optical processing techniques is described; holographic optical correlation is performed and applied in a feedback loop.
Abstract: A method and apparatus for compressing video data images uses optical processing techniques. The method and apparatus perform holographic optical correlation and apply holographic optical correlation in a feedback loop. A codebook of images or primitives for the correlation are stored in a holographic library.

PatentDOI
TL;DR: In this paper, a pitch search method and device are proposed for digitally encoding a wideband signal, in particular but not exclusively a speech signal, with a view to transmitting or storing, and synthesizing, this wideband sound signal.
Abstract: A pitch search method and device for digitally encoding a wideband signal, in particular but not exclusively a speech signal, with a view to transmitting or storing, and synthesizing, this wideband sound signal. The new method and device, which achieve efficient modeling of the harmonic structure of the speech spectrum, use several forms of low-pass filters applied to a pitch codevector; the one yielding the higher prediction gain (i.e., the lowest pitch prediction error) is selected and the associated pitch codebook parameters are forwarded.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: This paper develops an incremental learning paradigm for Bayesian classification of images that estimates the already learnt training samples from the existing codebook vectors and augments these to the new training set for re-training the classifier.
Abstract: Grouping images into (semantically) meaningful categories using low-level visual features is a challenging and important problem in content-based image retrieval. In this paper, we develop an incremental learning paradigm for Bayesian classification of images. Under the Bayesian paradigm, the class-conditional densities are represented in terms of codebook vectors. Learning is thus incrementally updating these codebook vectors as new training data become available. The proposed learning scheme estimates the already learnt training samples from the existing codebook vectors and augments these to the new training set for re-training the classifier. The above paradigm is shown to yield good results on three complex image classification problems. A classifier trained incrementally has comparable accuracies to the one which is trained using the true training samples.
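The re-training set construction can be sketched as follows. Replicating each codebook vector by the number of old samples it represented is our reading of "estimates the already learnt training samples from the existing codebook vectors"; names and signatures are illustrative.

```python
import numpy as np

def incremental_training_set(codebooks, counts, new_samples, new_labels):
    """Approximate the old training data by its codebook vectors, each
    replicated according to how many old samples it represented, then
    append the new samples to form the re-training set (sketch).
    """
    xs, ys = [], []
    for cls, (cb, cnt) in enumerate(zip(codebooks, counts)):
        for vec, c in zip(cb, cnt):
            xs.extend([vec] * int(c))   # pseudo-samples standing in for old data
            ys.extend([cls] * int(c))
    xs.extend(new_samples)
    ys.extend(new_labels)
    return np.array(xs), np.array(ys)
```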

Proceedings ArticleDOI
Nam Ha1
15 Mar 1999
TL;DR: This paper proposes a fast search method of algebraic codebook in CELP coders that reduces the computations considerably compared with G.729 at the expense of a slight degradation of speech quality, and gives better speech quality with smaller average search space thanG.729A.
Abstract: This paper proposes a fast search method of algebraic codebook in CELP coders. In the proposed method, the sequence of codebook search is reordered according to the criterion of mean squared weighted error between target vector and filtered adaptive codebook vector, and the algebraic codebook is searched until a predetermined threshold is satisfied. This method reduces the computations considerably compared with G.729 at the expense of a slight degradation of speech quality. Moreover, it gives better speech quality with smaller average search space than G.729A.
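The reorder-then-stop idea can be sketched as follows. This is a simplified stand-in for the G.729-style algebraic codebook search: here the cheap ranking criterion is the squared correlation with the target, whereas the paper ranks by the error against the filtered adaptive codebook vector; names are illustrative.

```python
import numpy as np

def reordered_early_exit_search(target, codevectors, threshold):
    """Rank codevectors by a cheap criterion, then evaluate the true
    matching error in that order and stop once it drops below `threshold`.
    Returns the best index, its error, and how many vectors were searched.
    """
    corrs = codevectors @ target
    order = np.argsort(-np.abs(corrs))          # most promising first
    energies = np.sum(codevectors ** 2, axis=1) + 1e-12
    t_energy = float(target @ target)
    best_idx, best_err, searched = -1, np.inf, 0
    for idx in order:
        searched += 1
        err = t_energy - corrs[idx] ** 2 / energies[idx]  # error at optimal gain
        if err < best_err:
            best_idx, best_err = int(idx), float(err)
        if best_err <= threshold:
            break                                # early termination
    return best_idx, best_err, searched
```

The trade-off in the paper appears directly here: a looser threshold terminates sooner at the cost of a slightly worse match.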

09 Apr 1999
TL;DR: This work presents a way to transform the original algorithm used in deriving the codebook by reducing the matrix as well as by ensuring the selection of minimum number of symptoms required to uniquely identify each problem in the codebooks.
Abstract: As the size of networks increases, real-time fault management becomes difficult due to the volume of traffic. A single problem can generate numerous symptoms, which are received as events by a network management system. These events can be correlated to deduce the source of the problem. One of the correlation techniques used is the codebook approach, developed by Yemini et al. A codebook is a matrix relating problems with symptoms. We present a way to transform the original algorithm used in deriving the codebook. Our algorithm improves efficiency by reducing the matrix as well as by ensuring the selection of the minimum number of symptoms required to uniquely identify each problem in the codebook. This avoids an exponential growth in the number of symptoms as the number of problems increases, which in turn yields savings in real-time processing.
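The reduction goal, finding a minimum symptom subset that still distinguishes every problem, can be illustrated with a brute-force search over subset sizes. The paper's algorithm is more efficient; this sketch only makes the objective concrete, and is practical only for small matrices.

```python
from itertools import combinations

def minimal_symptom_set(codebook):
    """Find a minimum subset of symptom columns that still gives every
    problem a unique signature. `codebook[p][s]` is 1 if problem p
    produces symptom s. Brute force over subset sizes, smallest first.
    """
    n_problems = len(codebook)
    n_symptoms = len(codebook[0])
    for size in range(1, n_symptoms + 1):
        for cols in combinations(range(n_symptoms), size):
            signatures = {tuple(row[c] for c in cols) for row in codebook}
            if len(signatures) == n_problems:
                return list(cols)        # every problem is uniquely identified
    return None  # problems are not distinguishable even with all symptoms
```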

Patent
29 Nov 1999
TL;DR: In this article, the adaptive codebook circuit is input with the perceptual weighting signal x'w(n), the past excitation signal v(n) output from the gain quantization circuit, the perceptual weighted impulse response hw n output from impulse response calculation circuit, and the pitch cycle Top from the pitch calculation circuit.
Abstract: In this speech encoding system, the limiter circuit receives the delay of the adaptive codebook obtained for the previous subframe, and the pitch cycle search range is limited so that the delay of the adaptive codebook obtained for the previous subframe is not discontinuous with the delay of the adaptive codebook to be obtained for the current subframe; the limited pitch cycle search range is output to the pitch calculation circuit. The pitch calculation circuit receives the output signal Xw(n) of the perceptual weighting circuit and the pitch cycle search range output from the limiter, calculates the pitch cycle Top, and then outputs at least one pitch cycle Top to the adaptive codebook circuit. The adaptive codebook circuit receives the perceptual weighting signal x'w(n), the past excitation signal v(n) output from the gain quantization circuit, the perceptual weighting impulse response hw(n) output from the impulse response calculation circuit, and the pitch cycle Top from the pitch calculation circuit, searches near the pitch cycle, and calculates the delay of the adaptive codebook. With the above composition, the delay of the adaptive codebook obtained for each subframe can be prevented from becoming discontinuous over time.

Patent
08 Jun 1999
TL;DR: In this article, the authors propose to store sub-excitation vectors with different characteristics in respective subcodebooks, so as to handle input signals with various characteristics and achieve excellent sound quality at decoding time.
Abstract: First codebook (61) and second codebook (62) each have two subcodebooks, and within each codebook, addition sections (66) and (67) obtain respective excitation vectors by adding sub-excitation vectors fetched from the two subcodebooks. Addition section (68) obtains an excitation sample by adding those excitation vectors. With this constitution, it is possible to store sub-excitation vectors with different characteristics in the respective subcodebooks. Therefore, it is possible to handle input signals with various characteristics and to achieve excellent sound quality at decoding time.

Journal ArticleDOI
TL;DR: A soft centroids method is proposed for binary vector quantizer design, where the codevectors can take any real value between zero and one during the codebook generation process.
Abstract: A soft centroids method is proposed for binary vector quantizer design. Instead of using binary centroids, the codevectors can take any real value between zero and one during the codebook generation process. The binarization is performed only for the final codebook. The proposed method is successfully applied to three existing codebook generation algorithms: GLA, SA and PNN.
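Applied to GLA, the soft centroids idea looks as follows: centroids stay real-valued in [0, 1] throughout training and are thresholded only at the end. The random initialisation and iteration count below are our choices, not the paper's.

```python
import numpy as np

def soft_centroid_gla(data, k, iters=20, seed=0):
    """GLA (k-means) for binary vectors with soft (real-valued) centroids.

    Centroids are cluster means, so they live in [0, 1] during training;
    they are binarised to the final codebook only after the last round.
    """
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        # Nearest-centroid assignment in squared Euclidean distance.
        d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        for j in range(k):
            members = data[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)  # soft centroid update
    return (centroids >= 0.5).astype(int)            # binarise final codebook only
```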

Patent
Kazunori Ozawa1
11 May 1999
TL;DR: A speech coding apparatus includes a spectrum parameter calculation section, an adaptive codebook section, a sound source quantization section, a discrimination section, and a multiplexer section.
Abstract: A speech coding apparatus includes a spectrum parameter calculation section, an adaptive codebook section, a sound source quantization section, a discrimination section, and a multiplexer section. The spectrum parameter calculation section receives a speech signal and quantizes a spectrum parameter. The adaptive codebook section obtains a delay and a gain from a past quantized sound source signal using an adaptive codebook, and obtains a residue by predicting a speech signal. The sound source quantization section quantizes a sound source signal using the spectrum parameter. The discrimination section discriminates the mode. The sound source quantization section has a codebook for representing a sound source signal by a combination of non-zero pulses and collectively quantizing amplitudes or polarities of the pulses in a predetermined mode, and searches combinations of code vectors and shift amounts used to shift the positions of the pulses to output a combination of a code vector and shift amount which minimizes distortion relative to input speech. The multiplexer section outputs a combination of outputs from the spectrum parameter calculation section, the adaptive codebook section, and the sound source quantization section.

Proceedings ArticleDOI
20 Jun 1999
TL;DR: Extensions of high rate theory provide the formulas to calculate estimates of optimal performance in terms of spectral distortion (SD) for first order time-recursive spectrum coders, and also tell us how to design coders with optimal VQ point density.
Abstract: Estimates of optimal performance in terms of spectral distortion (SD) for first order time-recursive spectrum coders are presented. Extensions of high rate theory provide us with the formulas to calculate estimates and also tell us how to design coders with optimal VQ point density. For this purpose, the PDF of the current spectrum parameter vector, given the previous one, is needed. This conditional PDF is obtained analytically from a model PDF for pairs of consecutive parameter vectors, based on Gaussian mixtures. The theory gives a lower bound of 16 bits to achieve 1 dB SD. Practical coders must base the adaptive codebook design on quantized previous vectors, and experiments suggest that another 2-3 bits are needed to achieve 1 dB SD. Informal subjective tests indicate that transparent quality may be maintained at even lower rates.

Patent
Anders Uvliden1, Jonas Svedberg1
24 Aug 1999
TL;DR: In this article, a multi-codebook fixed bitrate CELP signal block encoder/decoder includes a codebook selector for selecting, for each signal block, a corresponding codebook identification in accordance with a deterministic selection procedure that is independent of signal type.
Abstract: A multi-codebook fixed bitrate CELP signal block encoder/decoder includes a codebook selector (22) for selecting, for each signal block, a corresponding codebook identification in accordance with a deterministic selection procedure that is independent of signal type. Also included are means for encoding/decoding each signal block by using a codebook having the selected codebook identification.

Patent
Cheng-Chieh Lee1
14 Dec 1999
TL;DR: In this paper, a Trellis-Based Scalar Vector Quantizer (TB-SVQ) is applied to a set of digital data to set up a codebook boundary and to obtain a non-uniform density gain for a constellation in which the data signals will be encoded.
Abstract: Methods of designing successively refinable Trellis-Based Scalar-Vector quantizers (TB-SVQ) include a multi-stage process wherein a TB-SVQ is applied to a set of digital data to set up a codebook boundary and to obtain a non-uniform density gain for a constellation in which the data signals will be encoded. In at least one more stage, a Trellis coded quantizer (TCQ) is applied to the output codebook boundary of the first stage to obtain a granular or shaping gain of 1.53 dB. The inventive methods successively refine the TB-SVQ so that robust signal transmission is achieved. By applying a multi-stage process wherein a TB-SVQ is utilized in the first stage and a TCQ is utilized in the second and successive stages, the computational complexity and time for encoding the constellation are greatly reduced.

Proceedings ArticleDOI
15 Mar 1999
TL;DR: A method for adaptively allocating pulse position candidates in CELP coders that use pulse codebooks for excitation, such as ACELP, is described; the amplitude envelope of an adaptive code vector drives the allocation.
Abstract: CELP coders using pulse codebooks for excitation, such as ACELP, have the advantages of low complexity and high speech quality. At low bit rates, however, the decrease in the number of pulse position candidates and pulses degrades reconstructed speech quality. This paper describes a method for adaptively allocating pulse position candidates. In the proposed method, N efficient candidates for pulse positions are selected out of all possible positions in a subframe. The amplitude envelope of an adaptive code vector is used for selecting the N efficient candidates: the larger the amplitude, the more pulse positions are assigned. Because an adaptive code vector is used for the adaptation, the proposed method requires no additional bits for the adaptation. Experimental results show that the proposed method increases WSNRseg by 0.3 dB and MOS by 0.15.
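The allocation rule, more candidate positions where the adaptive code vector is large, can be sketched as follows. Using the plain absolute value as the amplitude envelope is a simplification; the paper may smooth it.

```python
import numpy as np

def allocate_pulse_positions(adaptive_codevector, n_candidates):
    """Pick the N candidate pulse positions where the adaptive code
    vector's amplitude envelope is largest (sketch of the allocation).

    No side information is needed: the decoder has the same adaptive
    code vector and can repeat the identical selection.
    """
    envelope = np.abs(adaptive_codevector)       # simplified amplitude envelope
    positions = np.argsort(-envelope)[:n_candidates]
    return np.sort(positions)
```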

Journal ArticleDOI
TL;DR: A parallel approach using the competitive continuous Hopfield neural network (CCHNN) is proposed for vector quantization in image compression, showing more promising results after convergence than the generalized Lloyd algorithm.

Proceedings ArticleDOI
15 Mar 1999
TL;DR: This paper improves the speaker recognition rates of an MLP classifier and an LPCC codebook alone using a linear combination of both methods, and proposes an efficient algorithm that reduces the computational complexity of the LPCC-VQ system by a factor of 4.
Abstract: This paper improves the speaker recognition rates of an MLP classifier and an LPCC codebook alone, using a linear combination of both methods. In simulations we have obtained an improvement of 4.7% over an LPCC codebook of 32 vectors and 1.5% for a codebook of 128 vectors (the error rate drops from 3.68% to 2.1%). We also propose an efficient algorithm that reduces the computational complexity of the LPCC-VQ system by a factor of 4.
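The linear combination of the two classifiers can be sketched as follows. The softmax normalisation of the VQ distortions and the weight `alpha` are our assumptions; the paper does not specify its exact normalisation here.

```python
import numpy as np

def fused_speaker_score(mlp_scores, vq_distortions, alpha=0.5):
    """Linearly combine MLP posteriors with LPCC-VQ codebook evidence.

    VQ distortions are negated and softmax-normalised so that, like the
    MLP outputs, higher means more likely; `alpha` weights the methods.
    Returns the winning speaker index and the combined score vector.
    """
    vq = np.exp(-np.asarray(vq_distortions, dtype=float))
    vq /= vq.sum()
    mlp = np.asarray(mlp_scores, dtype=float)
    mlp = mlp / mlp.sum()
    combined = alpha * mlp + (1 - alpha) * vq
    return int(np.argmax(combined)), combined
```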

Journal ArticleDOI
TL;DR: The variable-length constrained storage tree-structured vector quantization (VLCS-TSVQ) algorithm presented in this paper utilizes the codebook sharing by multiple vector sources concept as in CSVQ to greedily grow an unbalanced tree structured residual vector quantizer with constrained storage.
Abstract: Constrained storage vector quantization (CSVQ), introduced by Chan and Gersho (1990, 1991), allows for the stagewise design of balanced tree-structured residual vector quantization codebooks with low encoding and storage complexities. On the other hand, it has been established by Makhoul et al. (1985), Riskin et al. (1991), and by Mahesh et al. (see IEEE Trans. Inform. Theory, vol. 41, pp. 917-30, 1995) that a variable-length tree-structured vector quantizer (VLTSVQ) yields better coding performance than a balanced tree-structured vector quantizer and may even outperform a full-search vector quantizer due to the nonuniform distribution of rate among the subsets of its input space. The variable-length constrained storage tree-structured vector quantization (VLCS-TSVQ) algorithm presented in this paper utilizes the codebook sharing by multiple vector sources concept, as in CSVQ, to greedily grow an unbalanced tree-structured residual vector quantizer with constrained storage. It is demonstrated by simulations on test sets from various synthetic one-dimensional (1-D) sources and real-world images that the performance of VLCS-TSVQ, whose codebook storage complexity varies linearly with rate, can come very close to the performance of the greedy growth VLTSVQ of Riskin et al. and Mahesh et al. The dramatically reduced size of the overall codebook allows the transmission of the code vector probabilities as side information for source-adaptive entropy coding.