
Showing papers on "Codebook" published in 1989


Journal ArticleDOI
W.H. Equitz
TL;DR: The pairwise nearest neighbor (PNN) algorithm is presented as an alternative to the Linde-Buzo-Gray (1980, LBG) (generalized Lloyd, 1982) algorithm for vector quantization clustering.
Abstract: The pairwise nearest neighbor (PNN) algorithm is presented as an alternative to the Linde-Buzo-Gray (1980, LBG) (generalized Lloyd, 1982) algorithm for vector quantization clustering. The PNN algorithm derives a vector quantization codebook in a diminishingly small fraction of the time previously required, without sacrificing performance. In addition, the time needed to generate a codebook grows only as O(N log N) in training set size and is independent of the number of code words desired. Using this method, one can either minimize distortion subject to a maximum rate or minimize the number of code words needed subject to a maximum allowable distortion. The PNN algorithm can be used with squared error and weighted squared error distortion measures. Simulations on a variety of images encoded at 1/2 b/pixel indicate that PNN codebooks can be developed in roughly 5% of the time required by the LBG algorithm.
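
A minimal Python sketch of the merge rule behind PNN clustering, for readers who want the idea in code. This naive version is O(N^2) per merge; the paper's O(N log N) behaviour comes from additional bookkeeping (e.g. k-d tree structures) that is omitted here, and the function name is ours, not the paper's.

```python
import numpy as np

def pnn_codebook(training, n_codewords):
    """Merge the pair of clusters with the smallest squared-error increase
    until only n_codewords clusters remain; their centroids form the codebook."""
    centroids = [np.asarray(v, dtype=float) for v in training]
    counts = [1] * len(centroids)
    while len(centroids) > n_codewords:
        best = None
        for i in range(len(centroids)):
            for j in range(i + 1, len(centroids)):
                # increase in total squared error if clusters i and j are merged
                w = counts[i] * counts[j] / (counts[i] + counts[j])
                cost = w * float(np.sum((centroids[i] - centroids[j]) ** 2))
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        n = counts[i] + counts[j]
        centroids[i] = (counts[i] * centroids[i] + counts[j] * centroids[j]) / n
        counts[i] = n
        del centroids[j], counts[j]
    return np.array(centroids)
```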

456 citations


Proceedings ArticleDOI
23 May 1989
TL;DR: The authors present a 16-band subband coder arranged as four equal-width subbands in each dimension, which uses an empirically derived perceptual masking model to set noise-level targets not only for each subband but also for each pixel in a given subband.
Abstract: The authors present a 16-band subband coder arranged as four equal-width subbands in each dimension. It uses an empirically derived perceptual masking model to set noise-level targets not only for each subband but also for each pixel in a given subband. The noise-level target is used to set the quantization levels in a DPCM (differential pulse code modulation) quantizer. The output from the DPCM quantizer is then encoded, using an entropy-based coding scheme, in either 1*1, 1*2, or 2*2 pixel blocks. The type of encoding depends on the statistics in each 4*4 subblock of a particular subband. One set of codebooks, consisting of less than 100000 entries, is used for all images, and the codebook subset used for any given image is dependent on the distribution of the quantizer outputs for that image. A block elimination algorithm takes advantage of the peaky spatial energy distribution of subbands to avoid using bits for quiescent parts of a given subband. Using this system, high-quality output is obtainable at bit rates from 0.1 to 0.9 bits/pixel, and nearly transparent quality requires 0.3 to 1.5 bits/pixel.

284 citations


PatentDOI
TL;DR: A speech codec operating at low data rates uses an iterative method to jointly optimize pitch and gain parameter sets, employs a 26-bit spectrum filter coding scheme involving successive subtractions and quantizations, and allocates bits to the pitch and excitation signals depending on whether those signals are significant or not.
Abstract: A speech codec operating at low data rates uses an iterative method to jointly optimize pitch and gain parameter sets. A 26-bit spectrum filter coding scheme may be used, involving successive subtractions and quantizations. The codec may preferably use a decomposed multipulse excitation model, wherein the multipulse vectors used as the excitation signal are decomposed into position and amplitude codewords. Multipulse vectors are coded by comparing each vector to a reference multipulse vector and quantizing the resulting difference vector. An expanded multipulse excitation codebook and associated fast search method, optionally with a dynamically-weighted distortion measure, allow selection of the best excitation vector without memory or computational overload. In a dynamic bit allocation technique, the number of bits allocated to the pitch and excitation signals depends on whether the signals are "significant" or "insignificant". Silence/speech detection is based on an average signal energy over an interval and a minimum average energy over a predetermined number of intervals. Adaptive post-filter and automatic gain control schemes are also provided. Interpolation is used for spectrum filter smoothing, and an algorithm is provided for ensuring stability of the spectrum filter. Specially designed scalar quantizers are provided for the pitch gain and excitation gain.

110 citations


Proceedings ArticleDOI
23 May 1989
TL;DR: An automatic speaker adaptation method for speech recognition in which a small amount of training material of unspecified text can be used and results of recognition experiments indicate that the proposed adaptation method is highly effective.
Abstract: An automatic speaker adaptation method is proposed for speech recognition in which a small amount of training material of unspecified text can be used. This method is easily applicable to vector-quantization-based speech recognition systems where each word is represented as multiple sequences of codebook entries. In the adaptation algorithm, either the codebook is modified for each new speaker or input speech spectra are adapted to the codebook, thereby using codebook sequences universally for all speakers. The important feature of this algorithm is that a set of spectra in training frames and the codebook entries are clustered hierarchically. Based on the deviation vectors between centroids of the training frame clusters and the corresponding codebook clusters, adaptation is performed hierarchically from small to large numbers of clusters. Results of recognition experiments indicate that the proposed adaptation method is highly effective. Possible variations using this method are presented.

92 citations


PatentDOI
David L. Thomson1
TL;DR: A harmonic speech coding arrangement where vector quantization is used to improve speech quality is described in this article, where scaled vectors can be added into the magnitude and phase spectra for use at the synthesizer in generating speech as a sum of sinusoids.
Abstract: A harmonic speech coding arrangement where vector quantization is used to improve speech quality. Parameters are determined at the analyzer (120) of an illustrative coding arrangement to model the magnitude and phase spectra of the input speech. A first codebook of vectors is searched for a vector that closely approximates the difference between the true and estimated magnitude spectra. A second codebook of vectors is searched for a vector that closely approximates the difference between the true and the estimated phase spectra. Indices and scaling factors for the vectors are communicated to the synthesizer (160) such that scaled vectors can be added into the magnitude and phase spectra for use at the synthesizer in generating speech as a sum of sinusoids.

89 citations


PatentDOI
TL;DR: In this paper, unvoiced speech performance in low-rate multi-pulse coders is improved by employing a simple architecture with output quality comparable to code excited linear predictive (CELP) coding.
Abstract: Improved unvoiced speech performance in low-rate multi-pulse coders is achieved by employing a multi-pulse architecture that is simple in implementation but with an output quality comparable to code excited linear predictive (CELP) coding. A hybrid architecture is provided in which a stochastic excitation model that is used during unvoiced speech is also capable of modeling voiced speech by use of random codebook excitation. A modified method for calculating the gain during stochastic excitation is also provided.

73 citations


Journal ArticleDOI
TL;DR: It is shown that the computational complexity of this algorithm can be reduced further by ordering the codevectors according to the sizes of their corresponding clusters.
Abstract: Recently, C.D. Bei and R.M. Gray (1985) used a partial distance search algorithm that reduces the computational complexity of the minimum distortion encoding for vector quantization. The effect of ordering the codevectors on the computational complexity of the algorithm is studied. It is shown that the computational complexity of this algorithm can be reduced further by ordering the codevectors according to the sizes of their corresponding clusters.
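
A short Python sketch of a partial-distance search in which the codevectors are visited in an order supplied by the caller (for example, sorted by descending training-cluster size, as the paper suggests). The helper name and interface are illustrative, not the authors' code.

```python
import numpy as np

def pds_encode(x, codebook, visit_order):
    """Partial-distance search: abandon a candidate as soon as its running
    squared distance exceeds the best distance found so far."""
    best_idx = visit_order[0]
    best_dist = float(np.sum((x - codebook[best_idx]) ** 2))
    for idx in visit_order[1:]:
        d = 0.0
        for k in range(len(x)):           # accumulate one coordinate at a time
            d += (x[k] - codebook[idx][k]) ** 2
            if d >= best_dist:            # early exit: cannot beat current best
                break
        else:
            best_idx, best_dist = idx, d
    return best_idx
```

Visiting the codevectors of large clusters first tends to tighten best_dist early, so later candidates are abandoned after fewer coordinate computations.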

69 citations


Proceedings ArticleDOI
23 May 1989
TL;DR: Convergence of the algorithm to the globally optimal codebook in finite time is proved, and experimental results indicate that the proposed algorithm obtains the best known codebook for the experimental situation described by R.M. Gray and E.D. Karnin.
Abstract: The authors present an algorithm for the generation of codebooks from a set of training vectors using simulated annealing. Convergence of the algorithm to the globally optimal codebook in finite time is proved, and experimental results comparing simulated annealing with Lloyd algorithms for image quantization are presented. The experimental results indicate that the proposed algorithm obtains the best known codebook for the experimental situation described by R.M. Gray and E.D. Karnin (IEEE Trans. on Inf. Theory, vol.IT-28, no.2, p.256-61, Mar. 1982). It has also been demonstrated that this technique works well for the construction of codebooks from real image data.
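
The abstract does not give the annealing schedule, so the following Python sketch only illustrates the generic Metropolis accept/reject step applied to a codebook; every parameter value and the perturbation rule here are assumptions, not the paper's design.

```python
import numpy as np

def distortion(training, codebook):
    # mean of each training vector's squared distance to its nearest codeword
    d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return float(d.min(axis=1).mean())

def sa_codebook(training, n_codewords, steps=5000, t0=1.0, alpha=0.999, rng=None):
    rng = rng or np.random.default_rng(0)
    codebook = training[rng.choice(len(training), n_codewords, replace=False)].astype(float)
    cur, temp = distortion(training, codebook), t0
    for _ in range(steps):
        cand = codebook.copy()
        cand[rng.integers(n_codewords)] += rng.normal(scale=np.sqrt(temp), size=cand.shape[1])
        new = distortion(training, cand)
        # always accept improvements; accept worse codebooks with probability exp(-dD/T)
        if new < cur or rng.random() < np.exp((cur - new) / temp):
            codebook, cur = cand, new
        temp *= alpha                     # cooling schedule (assumed geometric)
    return codebook
```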

48 citations


PatentDOI
TL;DR: A noise reduction system used for transmission and/or recognition of speech includes a speech analyzer for analyzing a noisy speech input signal thereby converting the speech signal into feature vectors such as autocorrelation coefficients, and a neural network for receiving the feature vectors of the noisy speech signal as its input.
Abstract: A noise reduction system used for transmission and/or recognition of speech includes a speech analyzer for analyzing a noisy speech input signal thereby converting the speech signal into feature vectors such as autocorrelation coefficients, and a neural network for receiving the feature vectors of the noisy speech signal as its input. The neural network extracts from a codebook an index of prototype vectors corresponding to a noise-free equivalent to the noisy speech input signal. Feature vectors of speech are read out from the codebook on the basis of the index delivered as an output from the neural network, thereby causing the speech input to be reproduced on the basis of the feature vectors of speech read out from the codebook.

46 citations


Journal ArticleDOI
TL;DR: A practical high-throughput architecture and its implementation for real-time coding of television-quality signals are presented and the architecture is directed toward the implementation of multistage vector quantization (VQ), as the authors' simulation results show that the latter is more suitable for real-time coding.
Abstract: A practical high-throughput architecture and its implementation for real-time coding of television-quality signals are presented. The architecture is directed toward the implementation of multistage vector quantization (VQ), as the authors' simulation results show that the latter is more suitable for real-time coding. However, the implementation is suitable for both single-stage and multistage VQ. The functional blocks of the VQ encoder system have been designed and implemented in VLSI technology. The VQ encoding scheme designed has an encoding delay of 25 clock cycles and is independent of the codebook size.
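
As a quick illustration of the multistage VQ structure this architecture targets, here is a hedged Python sketch in which each stage quantizes the residual left by the previous one; the function names and per-stage codebooks are assumptions for illustration only.

```python
import numpy as np

def msvq_encode(x, stage_codebooks):
    """Return one index per stage; each stage codes the remaining residual."""
    residual = np.asarray(x, dtype=float)
    indices = []
    for cb in stage_codebooks:
        i = int(np.argmin(((residual - cb) ** 2).sum(axis=1)))
        indices.append(i)
        residual = residual - cb[i]
    return indices

def msvq_decode(indices, stage_codebooks):
    """The reconstruction is simply the sum of the indexed stage codevectors."""
    return sum(cb[i] for cb, i in zip(stage_codebooks, indices))
```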

44 citations


Patent
29 Nov 1989
TL;DR: In this paper, the first channel makes a full search of stored vectors in the codebook for a best match and outputs the index m(1) of the best match.
Abstract: Blocks of an image or voice input signal are decimated by a selected factor d (e.g., d=2) and distributed through a plurality (d²) of ordered channels for vector quantization coding using a codebook in which vectors are ordered, such as by their average intensity. The first channel makes a full search of stored vectors in the codebook for a best match and outputs the index m(1) of the best match. The second channel makes a partial search for a best match over a localized region of the codebook around the index m(1) and outputs the index m(2) of the best match. The subsequent channels make partial searches over a smaller localized region of the codebook around an index that is a function of the indices m(1) and m(2). At the decoder, the indices m(1), m(2), m(3) and m(4) are used to look up vectors in a codebook identical to the coder codebook. These vectors are then assembled by a process that is the inverse of the decimation and distribution process at the encoder to output a decoded signal that is a high-quality replica of the input signal. The narrow search ranges in the channels following the first reduce the encoding search time and bit rate for each of the input blocks. That range may be readily changed for each channel, and therefore may be made adaptive.
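
A rough Python sketch of the search pattern the claims describe, assuming the codebook is already sorted by average intensity. The window widths and the particular function of m(1) and m(2) used to centre the later searches are placeholders of ours; the patent leaves them adjustable.

```python
import numpy as np

def nearest_in_range(x, codebook, lo, hi):
    d = ((codebook[lo:hi] - x) ** 2).sum(axis=1)
    return lo + int(np.argmin(d))

def four_channel_encode(blocks, codebook, w2=32, w34=8):
    """blocks: the four decimated sub-blocks of one input block, in channel order."""
    n = len(codebook)
    m1 = nearest_in_range(blocks[0], codebook, 0, n)            # channel 1: full search
    m2 = nearest_in_range(blocks[1], codebook,                  # channel 2: window around m1
                          max(0, m1 - w2), min(n, m1 + w2))
    centre = (m1 + m2) // 2                                     # placeholder f(m1, m2)
    m3 = nearest_in_range(blocks[2], codebook,
                          max(0, centre - w34), min(n, centre + w34))
    m4 = nearest_in_range(blocks[3], codebook,
                          max(0, centre - w34), min(n, centre + w34))
    return m1, m2, m3, m4
```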

Journal ArticleDOI
TL;DR: An easily implementable stochastic relaxation algorithm for vector quantisation design is given, which generalises the usual Lloyd iteration in codebook design by perturbing the computed centroids with independent multidimensional noise, whose variance diminishes as the algorithm progresses.
Abstract: An easily implementable stochastic relaxation algorithm for vector quantisation design is given. It generalises the usual Lloyd iteration in codebook design by perturbing the computed centroids with independent multidimensional noise, whose variance diminishes as the algorithm progresses. A significant improvement is often achieved.
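
A minimal Python sketch of the perturbed Lloyd iteration the letter describes; the linear noise-decay schedule below is an assumption, since the letter only requires the perturbation variance to diminish as the algorithm progresses.

```python
import numpy as np

def stochastic_relaxation(training, n_codewords, iters=50, sigma0=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    codebook = training[rng.choice(len(training), n_codewords, replace=False)].astype(float)
    for t in range(iters):
        # Lloyd step: nearest-neighbour partition, then centroid update
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        sigma = sigma0 * (1.0 - t / iters)        # diminishing perturbation variance
        for k in range(n_codewords):
            members = training[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
            codebook[k] += rng.normal(scale=sigma, size=codebook.shape[1])
    return codebook
```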

Journal ArticleDOI
TL;DR: An automatic speaker adaptation algorithm for speech recognition, in which a small amount of training material of unspecified text can be used, which reduces the mean word recognition error rate from 4.9 to 2.9%.
Abstract: The author proposes an automatic speaker adaptation algorithm for speech recognition, in which a small amount of training material of unspecified text can be used. The algorithm is easily applied to vector-quantization (VQ)-based speech recognition systems consisting of a VQ codebook and a word dictionary in which each word is represented as a sequence of codebook entries. In the adaptation algorithm, the VQ codebook is modified for each new speaker, whereas the word dictionary is universally used for all speakers. The important feature of this algorithm is that a set of spectra in training frames and the codebook entries are clustered hierarchically. Based on the vectors representing deviation between centroids of the training frame clusters and the corresponding codebook clusters, adaptation is performed hierarchically from small to large numbers of clusters. The spectral resolution of the adaptation process is improved accordingly. Results of recognition experiments using utterances of 100 Japanese city names show that adaptation reduces the mean word recognition error rate from 4.9 to 2.9%. Since the error rate for speaker-dependent recognition is 2.2%, the adaptation method is highly effective.
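
One simplified reading of the hierarchical adaptation step, sketched in Python: at each level the codebook is clustered, each training frame is attached to its nearest codebook cluster, and the entries in that cluster are shifted by the deviation between the frame mean and the codebook centroid. The level schedule, the use of scipy's kmeans2, and the frame-assignment rule are all our assumptions rather than the author's procedure.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def hierarchical_adapt(codebook, frames, levels=(1, 2, 4, 8)):
    adapted = np.asarray(codebook, dtype=float).copy()
    for k in levels:                              # small to large numbers of clusters
        centroids, entry_labels = kmeans2(adapted, k, minit='++')
        d = ((frames[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        frame_labels = d.argmin(axis=1)           # nearest codebook cluster per frame
        for c in range(k):
            if np.any(frame_labels == c) and np.any(entry_labels == c):
                deviation = frames[frame_labels == c].mean(axis=0) - centroids[c]
                adapted[entry_labels == c] += deviation
    return adapted
```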

Proceedings ArticleDOI
23 May 1989
TL;DR: A semicontinuous hidden Markov model is proposed, which can be considered as a special form of continuous-mixture HMM with the continuous output probability density functions sharing in a mixture Gaussian density codebook, which leads to a unified modeling approach to vector quantization and hidden Markov modeling of speech signals.
Abstract: A semicontinuous hidden Markov model (HMM), which can be considered as a special form of continuous-mixture HMM with the continuous output probability density functions sharing in a mixture Gaussian density codebook, is proposed. The semicontinuous output probability density function is represented by a combination of the discrete output probabilities of the model and the continuous Gaussian density functions of a mixture Gaussian density codebook. The amount of training data required, as well as the computational complexity of the semicontinuous HMM, can be reduced in comparison to the continuous-mixture HMM. Parameters of the codebook and HMM can be mutually optimized to achieve an optimal model/codebook combination, which leads to a unified modeling approach to vector quantization and hidden Markov modeling of speech signals. Experimental results are included which show that the recognition accuracy of the semicontinuous HMM is measurably higher than those of both the discrete and the continuous HMM.
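
The key quantity is the semicontinuous output density: a state's discrete codeword probabilities weight the shared mixture-Gaussian codebook densities. Below is a hedged Python sketch; the top-M pruning is a common practical shortcut, not something this abstract specifies.

```python
import numpy as np
from scipy.stats import multivariate_normal

def semicontinuous_output_prob(x, state_codeword_probs, cb_means, cb_covs, top_m=4):
    """b_j(x) ~= sum over the M best codewords k of P_j(k) * N(x; mu_k, Sigma_k)."""
    dens = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                     for m, c in zip(cb_means, cb_covs)])
    top = np.argsort(dens)[-top_m:]               # M best-matching codebook Gaussians
    return float(np.dot(np.asarray(state_codeword_probs)[top], dens[top]))
```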

Patent
Claude Galand, Jean Menez, Michele Rosso
13 Oct 1989
TL;DR: In this article, a signal vector quantizing coder (CELP) is provided with an adaptive codebook originally loaded with preselected codewords, and the codebook is split into a fixed contents portion and a fixed length adaptive contents portion.
Abstract: A signal vector quantizing coder (CELP) is provided with an adaptive codebook originally loaded with preselected codewords. The codebook is split into a fixed contents portion and a fixed length adaptive contents portion. During coding operations, the codewords dynamically selected for coding the coder input signal are shifted into the fixed length adaptive codebook section for codebook contents updating purposes.

01 Feb 1989
TL;DR: With this VLSI chip set, an entire video coder can be built on a single board, permitting real-time experimentation with very large codebooks; the ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles.
Abstract: The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best code vector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
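
To make the O(log N) claim concrete, here is a toy Python sketch of a binary tree-structured codebook search; the node layout (test vectors at internal nodes, codevector indices at leaves) is assumed for illustration and is not the chip set's actual data path.

```python
import numpy as np

def tree_search(x, node):
    """Descend a binary VQ tree: O(log N) comparisons instead of a full O(N) scan."""
    while 'index' not in node:                     # internal node: pick the closer branch
        d_left = np.sum((x - node['left_vec']) ** 2)
        d_right = np.sum((x - node['right_vec']) ** 2)
        node = node['left'] if d_left <= d_right else node['right']
    return node['index']                           # leaf: codevector index
```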

PatentDOI
TL;DR: A method of encoding speech sounds to facilitate their transmission to and reconstruction at a remote receiver, in which the receiver derives the same combination of filtration parameters that the transmitter applied to its filter while selecting the codebook vector corresponding to the transmitted index.
Abstract: A method of encoding speech sounds to facilitate their transmission to and reconstruction at a remote receiver. A transmitter and a receiver have identical filters and identical codebooks containing prestored excitation vectors which model quantized speech sound vectors. The speech sound vectors are compared with filtered versions of the codebook vectors. The filtered vector closest to each speech sound vector is selected. During the comparison, filtration parameters derived by backward predictive analysis of a series of previously selected filtered codebook vectors are applied to the filter. The transmitter sends the receiver an index representative of the location of the selected vector within the codebook. The receiver uses the index to recover the selected vector from its codebook, and passes the recovered vector through its filter to yield an output signal which reproduces the original speech sound sample. By applying the same backward predictive analysis technique employed by the transmitter to the same series of previously selected filtered codebook vectors to which the transmitter applied the technique, the receiver derives the same combination of filtration parameters which the transmitter applied to its filter while selecting the codebook vector corresponding to the transmitted index.

Journal ArticleDOI
01 Dec 1989
TL;DR: In this paper, a fast search algorithm for vector quantization (VQ)-based recognition of isolated words is presented, which incorporates the property of high correlation between speech feature vectors of consecutive frames with the method of triangular inequality elimination to relieve the computational burden of vector-quantising the test feature vectors by full codebook search, and uses the extended partial distortion method to compress the incomplete matching computations of wildly mismatched words.
Abstract: This paper presents a fast search algorithm for vector quantisation (VQ)-based recognition of isolated words. It incorporates the property of high correlation between speech feature vectors of consecutive frames with the method of triangular inequality elimination to relieve the computational burden of vector-quantising the test feature vectors by full codebook search, and uses the extended partial distortion method to compress the incomplete matching computations of wildly mismatched words. Overall computational load can therefore be drastically reduced while the recognition performance of full search can be retained. Experimental results show that about 93% of multiplications and additions can be saved with a little increase of both comparisons and memory space.
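
A small Python sketch of the two pruning ideas combined, under our own simplifying assumptions: the previous frame's codeword seeds the search (exploiting inter-frame correlation), and the triangle inequality eliminates codewords using a precomputed codeword-to-codeword distance table.

```python
import numpy as np

def tie_encode(x, codebook, cw_dists, prev_idx):
    """cw_dists[i, j]: precomputed Euclidean distance between codewords i and j."""
    best = prev_idx                                # start from last frame's codeword
    best_d = np.linalg.norm(x - codebook[best])
    for j in range(len(codebook)):
        # triangle inequality: if d(c_best, c_j) >= 2*d(x, c_best),
        # then d(x, c_j) >= d(x, c_best), so c_j cannot win
        if j == best or cw_dists[best, j] >= 2.0 * best_d:
            continue
        d = np.linalg.norm(x - codebook[j])
        if d < best_d:
            best, best_d = j, d
    return best
```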

Proceedings ArticleDOI
23 May 1989
TL;DR: Novel fast optimal algorithms for finding the best sequence in this Barnes-Wall shell innovation codebook make it possible to design a CELP coder at 9.6 kb/s with good quality that is still implementable on a current digital-signal-processing chip.
Abstract: The authors present an algebraic code-excited linear prediction (CELP) speech coder where the innovation codebook comes from the first spherical code of the Barnes-Wall lattice in 16 dimensions. Novel fast optimal algorithms for finding the best sequence in this Barnes-Wall shell innovation codebook are described. This algebraic codebook makes it possible to design a CELP coder at 9.6 kb/s with good quality that is still implementable on a current digital-signal-processing chip.

Proceedings ArticleDOI
23 May 1989
TL;DR: Results support the hypothesis that the higher orders of PLP contain significant speaker-specific information, with ASI performance improving rapidly up to order 8, and then far more slowly yet consistently up to order 16, and a similar pattern is seen for codebook size, with fast improvements up to size 64 and more gradual gains thereafter.
Abstract: Results of an experimental study and the optimization of features for a conventional vector-quantization codebook-based automatic speaker identification (ASI) system are presented. Standard LPC (linear predictive coding) and a perceptually weighted feature termed PLP (perceptually based linear prediction) are compared using appropriate distance measures, namely, the log-likelihood, and three cepstral variants: constant weighting, the root-power-sum, and the inverse variance. PLP features combined with a weighted cepstral measure are found to be consistently the best in a number of different digit-independent ASI experiments. Results support the hypothesis that the higher orders of PLP (>5) contain significant speaker-specific information, with ASI performance improving rapidly up to order 8, and then far more slowly yet consistently up to order 16. A similar pattern is seen for codebook size, with fast improvements up to size 64, with more gradual gains thereafter.

Patent
09 Jun 1989
TL;DR: In this paper, a vector quantization method is proposed that calculates the norm of an input vector and identifies a reference codebook vector whose norm is closest to the norm of the input vector.
Abstract: A method for compressing data employing vector quantization is achieved by calculating the norm of an input vector and identifying a reference codebook vector which has a norm which is closest to the norm of the input vector. The distance between the input vector and the selected reference codebook vector is computed and employed to identify a vector space about the reference vector containing a subset of codebook vectors, one or more of which may be closer to the input vector than the initially selected reference vector. The closest codebook vector is selected iteratively without the necessity of searching every vector in the codebook.
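
One possible Python reading of the norm-guided search, using the bound |‖x‖ - ‖c‖| <= ‖x - c‖ so that codevectors whose norms differ from the input's norm by more than the current best distance can be skipped; the iteration order and stopping rule are our choices, not the patent's claims.

```python
import numpy as np

def norm_guided_search(x, codebook):
    norms = np.linalg.norm(codebook, axis=1)
    xn = np.linalg.norm(x)
    best = int(np.argmin(np.abs(norms - xn)))      # reference: norm closest to ||x||
    best_d = np.linalg.norm(x - codebook[best])
    for idx in np.argsort(np.abs(norms - xn)):     # examine nearest norms first
        if abs(norms[idx] - xn) >= best_d:         # all later norms are farther: stop
            break
        d = np.linalg.norm(x - codebook[idx])
        if d < best_d:
            best, best_d = int(idx), d
    return best
```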

Proceedings ArticleDOI
23 May 1989
TL;DR: The authors show that they can further reduce the computational complexity and the storage requirements of the coder, while improving the perceptual quality of the reconstructed speech.
Abstract: Code-excited linear predictive coding (CELP) is a recent vector waveform coding technique which permits the encoding of telephone speech with high quality at very low bit rates. The authors show that they can further reduce the computational complexity and the storage requirements of the coder, while improving the perceptual quality of the reconstructed speech. These improvements are achieved by two key factors: the implementation of a noise-shaping effect by alternate estimation of the short-term predictor coefficients, and the use of a fixed/adaptive codebook together with a long-term predictor.

Proceedings ArticleDOI
23 May 1989
TL;DR: A code-book approach is developed for glottal-pulse modeling of speech, allowing for efficient minimization of an objective measure of distortion, consistent with ordinary CELP (code-excited linear prediction) analysis.
Abstract: A code-book approach is developed for glottal-pulse modeling of speech. The authors extend previous methods of glottal pulse analysis, suggesting a practical scheme suitable for speech coding and compression. The scheme is based on using a codebook of glottal pulse signals, thereby allowing for efficient minimization of an objective measure of distortion, consistent with ordinary CELP (code-excited linear prediction) analysis. Subjective and objective quality, as well as analysis complexity, compare favorably with established methods, such as stochastic 1024-CELP coders. Moreover, the proposed coder is robust.

Patent
31 May 1989
TL;DR: In this paper, the quantizer generates a compressed signal by replacing each input vector in a frame with an ID code of a closely matching codebook vector, which is then processed on subsequent frames to achieve additional compression.
Abstract: Post-quantization processing is disclosed in which frames of data are organized into frames of input vectors. For each input vector, the quantizer identifies a closely matching codebook vector. The quantizer generates a compressed signal by replacing each input vector in a frame with an ID code of a closely matching codebook vector. On subsequent frames, the quantizer further processes ID codes to achieve additional compression. In one embodiment, the ID codes from one frame are compared to the corresponding ID codes in the previous frames. If the ID code from the subsequent frame (new ID code) is the same as the corresponding ID code from the previous frame (old ID code), the new ID code is eliminated from the frame and a tag bit is set to indicate that the ID code was eliminated. Similarly, if the new ID code represents a vector which is only slightly different from the vector represented by the old ID code, the new ID code is replaced with a tag bit. In this way, transmission of the subsequent frames requires only the transmission of vectors which differ by a significant amount from the prior frame. In other embodiments, other post-processing methods are used. For example, lossless coding techniques, such as the so-called Lempel-Ziv and Huffman codes, are discussed.
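
A compact Python sketch of the frame-to-frame tag-bit idea in the first embodiment; the "almost identical vector" test and the bit-level packing of the tags are omitted, and the function names are ours.

```python
def compress_frame(new_ids, old_ids):
    """Tag bit 1 = reuse previous frame's codeword (ID dropped); 0 = new ID follows."""
    tags, payload = [], []
    for new, old in zip(new_ids, old_ids):
        if new == old:
            tags.append(1)
        else:
            tags.append(0)
            payload.append(new)
    return tags, payload

def decompress_frame(tags, payload, old_ids):
    it = iter(payload)
    return [old if tag else next(it) for tag, old in zip(tags, old_ids)]
```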

Patent
06 Apr 1989
TL;DR: In this paper, an electronic circuit for quantizing vector signals is presented, where the circuit compares input vectors with codebook entries that are representative of a vector space and simultaneously computes the distance between the image vectors and the codebook entries.
Abstract: An electronic circuit for quantizing vector signals is provided. The circuit compares input vectors with codebook entries that are representative of a vector space. Processors simultaneously compute the distance between the image vectors and the codebook entries. Selection circuitry is provided which compares the distance values and generates outputs which are representative of the codebook vector having the smallest distance and the distance itself.

Proceedings ArticleDOI
23 May 1989
TL;DR: Two novel techniques for use in VXC (vector excitation coding) speech coders are presented, including a generalization of the gain-optimized error measure which allows any number of gains to be calculated for each excitation vector.
Abstract: Two novel techniques for use in VXC (vector excitation coding) speech coders are presented. The first enables massive excitation codebooks (≥20 b) to be used at realizable complexities by using a novel spherical lattice codebook for the excitation codebook. The second technique is a generalization of the gain-optimized error measure which allows any number of gains to be calculated for each excitation vector. This multiple-gain VXC can be thought of as a hybrid between multipulse and VXC.

Journal ArticleDOI
TL;DR: A new analysis-by-synthesis speech coding approach able to produce good quality speech in the vicinity of 4.8 kbit/s is presented, based on the ternary code excitation CELP introduced previously.
Abstract: A new analysis-by-synthesis speech coding approach able to produce good quality speech in the vicinity of 4.8 kbit/s is presented. The new approach produces the same speech quality as obtained by CELP codecs without needing any excitation codebook storage. The new coder employs a very simple excitation search procedure and possesses an inherent robustness against channel errors. The approach is based on the ternary code excitation CELP introduced previously (see P. Desantis et al. 1986).

Proceedings ArticleDOI
27 Nov 1989
TL;DR: Novel 2.4-kb/s linear predictive speech coders based on the analysis-by-synthesis method are proposed, and it is found that the model which selects the excitation signal from either a random sequence codebook or a pitch synthesizer produces the best perceived quality speech.
Abstract: Novel 2.4-kb/s linear predictive speech coders based on the analysis-by-synthesis method are proposed. The introduction of a perceptually weighted distortion measure between the original speech and the reconstructed speech implicitly optimizes both the voiced/unvoiced decision and the pitch estimation/tracking. The coders are also shown to be more robust to background acoustic noises. The resultant speech quality is significantly enhanced by judicious parameter coding. Three excitation models are proposed and investigated. It is found that the model which selects the excitation signal from either a random sequence codebook or a pitch synthesizer produces the best perceived quality speech.

Journal Article
TL;DR: The possible use of Kohonen's neural network to vector quantize images is investigated, based on the concurrent use of five networks where the effect of various relevant parameters is studied.
Abstract: In this article, the possible use of Kohonen's neural network to vector quantize images is investigated. Some theoretical results on convergence of the training process are first given. Then, results obtained for various codebook sizes and input dimensions are compared. Tests are then performed with the best parameter values, using several images to design codebooks. This approach is based on the concurrent use of five networks where the effect of various relevant parameters is studied, such as the number of classes of vectors, vector dimension, and the number of vectors used for coding.
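
A generic Python sketch of a one-dimensional Kohonen map used as a codebook designer; the article's five-network arrangement and the parameter schedules it studies are not reproduced here, and the learning-rate and neighbourhood decays below are assumptions.

```python
import numpy as np

def som_codebook(training, n_codewords, epochs=10, lr0=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    codebook = training[rng.choice(len(training), n_codewords, replace=False)].astype(float)
    radius0 = n_codewords / 2.0
    total, step = epochs * len(training), 0
    for _ in range(epochs):
        for x in training[rng.permutation(len(training))]:
            frac = step / total
            lr = lr0 * (1.0 - frac)                      # decaying learning rate
            radius = max(1.0, radius0 * (1.0 - frac))    # shrinking neighbourhood
            winner = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
            dist = np.abs(np.arange(n_codewords) - winner)
            h = np.exp(-(dist ** 2) / (2.0 * radius ** 2))
            codebook += lr * h[:, None] * (x - codebook)
            step += 1
    return codebook
```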

Proceedings ArticleDOI
09 Apr 1989
TL;DR: An adaptive algorithm for vector quantization (VQ) whose codebook is constantly being updated by the most recent past input vector, therefore eliminating the need for long training-sequence processes as with regular VQ.
Abstract: The authors discuss an adaptive algorithm for vector quantization (VQ) whose codebook is constantly being updated by the most recent past input vector. Using this approach, the VQ codebook is continuously training on new data, therefore eliminating the need for long training-sequence processes as with regular VQ . If identical adaptation rules are used at the encoder and decoder, no side information is sent and the system behaves as backward-looking adaptive scalar quantizers. Adaptive-vector-quantization (AVQ) also has the advantage of less degradation with varying statistics on the input signal as compared with regular VQ. Simulation results are presented outlining the bit-rate-vs.-SNR (signal-to-noise ratio) performance for the AVQ system. It is shown that, although AVQ performs 1-3-dB SNR worse than regular VQ (at 1 b/sample) inside the training sequence, AVQ outperforms regular VQ by as much as 12 dB outside the training sequence. This improvement implies the realizability of adaptive VQ for real-time digital coding systems. >