
Showing papers on "Codebook published in 1993"


PatentDOI
TL;DR: In this article, a wideband speech signal (8 kHz) of high quality is reconstructed from a narrowband speech signal (300 Hz to 3.4 kHz) by LPC analysis to obtain spectrum information parameters.
Abstract: A wideband speech signal (8 kHz, for example) of high quality is reconstructed from a narrowband speech signal (300 Hz to 3.4 kHz). The input narrowband speech signal is LPC-analyzed to obtain spectrum information parameters, and the parameters are vector-quantized using a narrowband speech signal codebook. For each code number of the narrowband speech signal codebook, the wideband speech waveform corresponding to the codevector concerned is extracted by one pitch for voiced speech and by one frame for unvoiced speech and prestored in a representative waveform codebook. Representative waveform segments corresponding to the respective output codevector numbers of the quantizer are extracted from the representative waveform codebook. Voiced speech is synthesized by pitch-synchronous overlapping of the extracted representative waveform segments, and unvoiced speech is synthesized by randomly using waveforms of one frame length. In this way, a wideband speech signal is produced. Then, frequency components below 300 Hz and above 3.4 kHz are extracted from the wideband speech signal and are added to an up-sampled version of the input narrowband speech signal to thereby reconstruct the wideband speech signal.

219 citations
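To make the codebook-mapping step concrete, here is a minimal Python/numpy sketch assuming a trained narrowband parameter codebook and one prestored wideband segment per code index; the voiced/unvoiced distinction, pitch-synchronous placement, and the final band-combination step of the patent are omitted, and every name and size below is illustrative.

```python
import numpy as np

# Illustrative stand-ins: a trained narrowband parameter codebook and one
# prestored representative wideband segment per code index (hypothetical data).
rng = np.random.default_rng(0)
nb_codebook = rng.standard_normal((64, 10))    # 64 codevectors of spectral parameters
wb_segments = rng.standard_normal((64, 160))   # one wideband waveform segment per index

def quantize(params, codebook):
    """Index of the nearest codevector in Euclidean distance."""
    return int(np.argmin(((codebook - params) ** 2).sum(axis=1)))

def synthesize_wideband(nb_param_frames, hop=80):
    """Overlap-add the wideband segments selected frame by frame."""
    seg_len = wb_segments.shape[1]
    out = np.zeros(hop * len(nb_param_frames) + seg_len)
    win = np.hanning(seg_len)
    for t, params in enumerate(nb_param_frames):
        idx = quantize(params, nb_codebook)
        out[t * hop : t * hop + seg_len] += win * wb_segments[idx]
    return out

frames = rng.standard_normal((5, 10))          # stand-in for per-frame LPC parameters
wideband = synthesize_wideband(frames)
```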


Journal ArticleDOI
TL;DR: It is shown experimentally that as the number of stages is increased above the optimal performance/complexity tradeoff, the quantizer robustness and outlier performance can be improved at the expense of a slight increase in rate.
Abstract: A tree-searched multistage vector quantization (VQ) scheme for linear prediction coding (LPC) parameters which achieves spectral distortion lower than 1 dB with low complexity and good robustness using rates as low as 22 b/frame is presented. The M-L search is used, and it is shown that it achieves performance close to that of the optimal search for a relatively small M. A joint codebook design strategy for multistage VQ which improves convergence speed and the VQ performance measures is presented. The best performance/complexity tradeoffs are obtained with relatively small size codebooks cascaded in a 3-6 stage configuration. It is shown experimentally that as the number of stages is increased above the optimal performance/complexity tradeoff, the quantizer robustness and outlier performance can be improved at the expense of a slight increase in rate. Results for log area ratio (LAR) and line spectral pairs (LSP) parameters are presented. A training technique that reduces outliers at the expense of a slight average performance degradation is introduced. The method significantly outperforms the split codebook approach.

201 citations
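A hedged sketch of tree-searched multistage VQ with an M-best (M-L) search: at each stage the M lowest-distortion partial paths survive instead of only the single nearest codevector. Stage sizes and M are illustrative, not the paper's configuration.

```python
import numpy as np

def multistage_vq_ml(x, stage_codebooks, M=4):
    """Tree-searched multistage VQ: keep the M best residual paths per stage."""
    paths = [(x.copy(), [])]            # (residual so far, chosen indices)
    for cb in stage_codebooks:
        candidates = []
        for residual, idx in paths:
            d = ((cb - residual) ** 2).sum(axis=1)   # distortion to every codevector
            for j in np.argsort(d)[:M]:              # expand the M nearest
                candidates.append((residual - cb[j], idx + [int(j)]))
        # Survivors: the M candidates with the smallest remaining residual energy.
        candidates.sort(key=lambda p: (p[0] ** 2).sum())
        paths = candidates[:M]
    best_residual, best_indices = paths[0]
    return best_indices, (best_residual ** 2).sum()

rng = np.random.default_rng(1)
stages = [rng.standard_normal((16, 8)) for _ in range(3)]  # three small stage codebooks
indices, err = multistage_vq_ml(rng.standard_normal(8), stages)
```

Setting M=1 gives the plain sequential search; letting M grow toward the codebook size approaches the optimal joint search at exponential cost, which is the tradeoff the paper quantifies.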


Journal ArticleDOI
01 Jul 1993
TL;DR: Two combined unequal error protection (UEP) coding and modulation schemes are proposed, based on partitioning a signal constellation into disjoint subsets in which the most important data sequence is encoded, using most of the available redundancy, to specify a sequence of subsets.
Abstract: Two combined unequal error protection (UEP) coding and modulation schemes are proposed. The first method multiplexes different coded signal constellations, with each coded constellation providing a different level of error protection. In this method, a codeword specifies the multiplexing rule, and the choice of the codeword from a fixed codebook is used to convey additional important information. The decoder determines the multiplexing rule before decoding the rest of the data. The second method is based on partitioning a signal constellation into disjoint subsets in which the most important data sequence is encoded, using most of the available redundancy, to specify a sequence of subsets. The partitioning and code construction are done to maximize the minimum Euclidean distance between two different valid subset sequences, which leads to new ways of partitioning signal constellations into subsets. The less important data selects a sequence of signal points to be transmitted from the subsets. A side benefit of the proposed set partitioning procedure is a reduction in the number of nearest neighbors, sometimes even relative to the uncoded signal constellation.

200 citations


Journal ArticleDOI
S.-W. Ra, J.-K. Kim
TL;DR: A new fast search algorithm for vector quantization using the mean of image vectors is proposed; simulations show that the number of calculations can be reduced to as little as one-fourth of the number required by the partial distance method.
Abstract: A new fast search algorithm for vector quantization using the mean of image vectors is proposed. The codevectors are sorted according to their component means, and the search for the codevector having the minimum Euclidean distance to a given input vector starts with the one having the minimum mean distance to it, making use of the observation that the two codevectors are close to each other in most real images. The search is then made to terminate as soon as a simple yet novel test reports that any remaining vector in the codebook must have a larger Euclidean distance. Simulations show that the number of calculations can be reduced to as little as one-fourth of the number required by the partial distance method.

190 citations
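The early-termination test can be built from the Cauchy-Schwarz bound ||x − y||² ≥ k·(mean(x) − mean(y))² for k-dimensional vectors. The sketch below is a hedged reconstruction, not the authors' exact test: codevectors are visited outward from the nearest-mean starting point, and the search stops once the mean difference alone rules out every unvisited codevector.

```python
import numpy as np

def mean_ordered_search(x, codebook):
    """Nearest codevector via mean-ordered search with early termination.

    Relies on ||x - y||^2 >= k * (mean(x) - mean(y))^2 for k-dim vectors:
    once the mean distance alone rules out the closest unvisited codevector,
    no remaining codevector can beat the best Euclidean distance found.
    """
    k = len(x)
    means = codebook.mean(axis=1)
    order = np.argsort(means)            # precomputable at codebook-design time
    sm = means[order]
    mx = x.mean()
    lo = int(np.searchsorted(sm, mx)) - 1
    hi = lo + 1
    best_idx, best_d2 = -1, np.inf
    while lo >= 0 or hi < len(sm):
        # Visit whichever neighbour (in mean order) is closer in mean to x.
        if lo < 0 or (hi < len(sm) and abs(sm[hi] - mx) <= abs(sm[lo] - mx)):
            i, hi = hi, hi + 1
        else:
            i, lo = lo, lo - 1
        if k * (sm[i] - mx) ** 2 >= best_d2:
            break                        # every remaining codevector is farther
        d2 = ((codebook[order[i]] - x) ** 2).sum()
        if d2 < best_d2:
            best_idx, best_d2 = int(order[i]), d2
    return best_idx, best_d2

rng = np.random.default_rng(2)
cb = rng.standard_normal((256, 16))
x = rng.standard_normal(16)
assert mean_ordered_search(x, cb)[0] == int(np.argmin(((cb - x) ** 2).sum(axis=1)))
```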


Patent
Erik P. Staats
23 Jun 1993
TL;DR: The vector quantization codebook and per-codevector thresholds are used by a vector quantizer to encode a set of input vectors (V1 to VTOT); determining that the distance between a vector to be encoded and a codebook vector is less than the associated threshold causes the search for the closest vector to terminate in a nearest-neighbor vector quantizer.
Abstract: Methods and apparatus for vector quantization. A threshold generator generates a threshold (Threshold_i) associated with each of the n quantized vectors in a vector quantization codebook. The vector quantization codebook and the thresholds are used by a vector quantizer to encode a set of input vectors (V1 to VTOT). The determination that a distance between a vector to be encoded and a quantized vector in the codebook is less than the associated threshold causes the search for the closest vector to terminate for a nearest-neighbor vector quantizer. In some embodiments, the vectors comprise samples of continuous signals for sound containing speech, or display signals. In other embodiments, codebook vectors are arranged from most frequently encoded vectors to least frequently encoded vectors.

125 citations
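A small sketch of threshold-terminated nearest-neighbour encoding. The patent's threshold generator is not specified here, so purely as an assumption each threshold is set to half of the codevector's distance to its nearest codebook neighbour; by the triangle inequality, any input falling inside that radius has provably found its true nearest neighbour, making early termination safe.

```python
import numpy as np

def threshold_search(x, codebook, thresholds):
    """Nearest-neighbour encoding that stops as soon as a codevector is
    'close enough': distance below that codevector's precomputed threshold."""
    best_idx, best_d2 = -1, np.inf
    for i, (c, t) in enumerate(zip(codebook, thresholds)):
        d2 = ((c - x) ** 2).sum()
        if d2 < best_d2:
            best_idx, best_d2 = i, d2
        if d2 < t * t:        # within the threshold: accept and terminate early
            break
    return best_idx

rng = np.random.default_rng(3)
cb = rng.standard_normal((128, 8))
# Assumed threshold rule: half the distance to the nearest other codevector.
# If d(x, c_i) < d(c_i, c_j)/2 for all j, then c_i is the true nearest neighbour.
nn_d = np.sqrt(np.sort(((cb[:, None, :] - cb[None, :, :]) ** 2).sum(-1), axis=1)[:, 1])
thresholds = 0.5 * nn_d
idx = threshold_search(rng.standard_normal(8), cb, thresholds)
```

Ordering the codebook from most to least frequently used, as the patent's other embodiment suggests, makes an early hit more likely and compounds the savings.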


Journal ArticleDOI
TL;DR: The approach establishes a unifying framework for different quantization methods such as K-means clustering and its fuzzy version, entropy-constrained vector quantization, topological feature maps, and competitive neural networks.
Abstract: Vector quantization is a data compression method by which a set of data points is encoded by a reduced set of reference vectors: the codebook. A vector quantization strategy is discussed that jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions, and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression. The wavelet coefficients of gray-level images are quantized, and the reconstruction error is measured. The approach establishes a unifying framework for different quantization methods such as K-means clustering and its fuzzy version, entropy-constrained vector quantization, topological feature maps, and competitive neural networks.

106 citations
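A minimal sketch in the spirit of this framework, under stated assumptions: squared-error distortion, an entropy-style code cost as the complexity term, and maximum-entropy (Gibbs) assignment probabilities with probability-weighted centroid updates. The parameters beta and lam and the cost choice are illustrative, not the paper's.

```python
import numpy as np

def soft_vq(data, n_codes, beta=4.0, lam=0.1, iters=50, seed=0):
    """Maximum-entropy-style VQ sketch: Gibbs assignment probabilities over
    distortion plus a rate penalty, with probability-weighted centroid updates."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), n_codes, replace=False)].copy()
    usage = np.full(n_codes, 1.0 / n_codes)       # codevector usage probabilities
    for _ in range(iters):
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        # Complexity term: -log(usage) acts as an entropy (rate) cost per codevector.
        logits = -beta * (d2 - lam * np.log(usage + 1e-12))
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)         # soft assignment probabilities
        usage = p.mean(axis=0)
        codebook = (p.T @ data) / (p.sum(axis=0)[:, None] + 1e-12)
    return codebook, usage

rng = np.random.default_rng(4)
codebook, usage = soft_vq(rng.standard_normal((500, 2)), n_codes=8)
```

Taking beta to infinity recovers hard K-means assignments; the rate penalty lets rarely used reference vectors starve, which is one way the effective codebook size is determined by the cost function.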


Journal ArticleDOI
TL;DR: It is shown that although an optimal encoding can be implemented by a sequential encoder, the complexity of implementing optimal stagewise partitions generally exceeds the complexity of an exhaustive search of the direct sum codebook.
Abstract: The use of direct sum codebooks to minimize the memory requirements of vector quantizers is investigated. Assuming arbitrary fixed partitions, necessary conditions for minimum distortion codebooks are derived, first for scalar codebooks, assuming mean-squared error distortion, and then for vector codebooks and a broader class of distortion measures. An iterative procedure is described for designing locally optimal direct sum codebooks. Both optimal and computationally efficient suboptimal encoding schemes are considered. It is shown that although an optimal encoding can be implemented by a sequential encoder, the complexity of implementing optimal stagewise partitions generally exceeds the complexity of an exhaustive search of the direct sum codebook. It is also shown that sequential nearest-neighbor encoders can be extremely inefficient. The M-search method is explored as one method of improving the effectiveness of suboptimal sequential encoders. Representative results for simulated direct sum quantizers are presented.

96 citations
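For intuition: a direct sum codebook represents each reproduction vector as the sum of one entry from each of several small stage codebooks, so three codebooks of 8 entries span 8³ = 512 reproductions while storing only 24 vectors. The sketch below (illustrative sizes) contrasts the greedy sequential nearest-neighbour encoder with the optimal exhaustive search; the gap between them is exactly what the paper analyzes.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
stages = [rng.standard_normal((8, 4)) for _ in range(3)]  # 8^3 = 512 codevectors,
                                                          # stored as only 24 vectors

def sequential_encode(x):
    """Greedy stage-by-stage (sequential nearest-neighbour) encoding."""
    residual, idx = x.copy(), []
    for cb in stages:
        j = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        idx.append(j)
        residual -= cb[j]
    return idx, (residual ** 2).sum()

def exhaustive_encode(x):
    """Optimal encoding: exhaustive search over all index combinations."""
    best = (None, np.inf)
    for combo in product(*(range(len(cb)) for cb in stages)):
        y = sum(cb[j] for cb, j in zip(stages, combo))
        d2 = ((x - y) ** 2).sum()
        if d2 < best[1]:
            best = (list(combo), d2)
    return best

x = rng.standard_normal(4)
print(sequential_encode(x)[1], exhaustive_encode(x)[1])  # greedy is often worse
```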


Journal ArticleDOI
TL;DR: A new directed-search binary-splitting method which reduces the complexity of the LBG algorithm, and a new initial codebook selection method which can obtain a good initial codebook, are presented.
Abstract: A review and a performance comparison of several often-used vector quantization (VQ) codebook generation algorithms are presented. The codebook generation algorithms discussed include the Linde-Buzo-Gray (LBG) binary-splitting algorithm, the pairwise nearest-neighbor algorithm, the simulated annealing algorithm, and the fuzzy c-means clustering analysis algorithm. A new directed-search binary-splitting method, which reduces the complexity of the LBG algorithm, is presented. Also, a new initial codebook selection method which can obtain a good initial codebook is presented. By using this initial codebook selection algorithm, the overall LBG codebook generation time can be reduced by a factor of 1.5-2.

88 citations
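A compact sketch of the baseline that the paper's directed-search variant accelerates: LBG codebook generation by binary splitting, where the codebook is doubled by perturbing each codevector and then refined with Lloyd iterations. The perturbation factor and iteration counts are illustrative.

```python
import numpy as np

def lbg(data, n_codes, eps=1e-3, iters=20):
    """LBG codebook design with binary splitting: start from the global centroid
    and repeatedly double the codebook by perturbing each codevector."""
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < n_codes:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split
        for _ in range(iters):                      # Lloyd iterations after each split
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d2.argmin(axis=1)
            for i in range(len(codebook)):
                members = data[nearest == i]
                if len(members):                    # leave empty cells unchanged
                    codebook[i] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(6)
cb = lbg(rng.standard_normal((1000, 2)), n_codes=16)
```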


Proceedings ArticleDOI
25 Oct 1993
TL;DR: Extending earlier work on rendering compressed data directly, this paper introduces a new compressed volume format, representing the dataset as indices into a small codebook of representative blocks, that not only reduces storage space and transmission time but is also designed for fast volume rendering.
Abstract: Volume rendering has been proposed as a useful tool for extracting information from large datasets, where non-visual analysis alone may not be feasible. The scale of these applications implies that data management is an important issue that needs to be addressed. Most volume rendering algorithms, however, process data in raw, uncompressed form. In previous work, we introduced a compressed volume format that may be volume rendered directly with minimal impact on rendering time. In this paper, we extend these ideas to a new volume format that not only reduces storage space and transmission time, but is designed for fast volume rendering as well. The volume dataset is represented as indices into a small codebook of representative blocks. With the data structure, volume shading calculations need only be performed on the codebook and image generation is accelerated by reusing precomputed block projections.

84 citations


Journal ArticleDOI
TL;DR: The results indicate that the discrete cosine transform (DCT) is the best transform to use in transform-based encryption and a modification of the DCT-based scheme which significantly improves the security of the scrambler is proposed.
Abstract: Four discrete orthogonal transforms have been evaluated for their suitability for use in transform-based analog speech encryption. Subjective as well as objective tests were conducted to compare the residual intelligibility and the recovered speech quality under channel conditions. The cryptanalytic strengths of the schemes were then compared by applying a novel cryptanalytic attack which exploits the redundancy of speech using a spectral vector codebook. The results indicate that the discrete cosine transform (DCT) is the best transform to use in transform-based encryption. A modification of the DCT-based scheme which significantly improves the security of the scrambler is proposed.

74 citations
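For concreteness, a generic DCT-based coefficient-permutation scrambler of the kind this class of systems uses: each frame is transformed, its coefficients are permuted with a keyed permutation, and the inverse transform returns a signal in the analog bandwidth. This is a hedged sketch of the general technique only; the paper's specific schemes and its security-enhancing modification are not reproduced.

```python
import numpy as np
from scipy.fft import dct, idct

def scramble(frame, key):
    """Permute the DCT coefficients of one speech frame with a keyed permutation;
    the output stays an analog-bandwidth signal with low residual intelligibility."""
    perm = np.random.default_rng(key).permutation(len(frame))
    return idct(dct(frame, norm='ortho')[perm], norm='ortho')

def descramble(frame, key):
    perm = np.random.default_rng(key).permutation(len(frame))
    coeffs = np.empty_like(frame)
    coeffs[perm] = dct(frame, norm='ortho')        # invert the permutation
    return idct(coeffs, norm='ortho')

x = np.random.default_rng(7).standard_normal(256)
assert np.allclose(descramble(scramble(x, key=42), key=42), x)
```

The codebook attack described above works because speech spectra are highly redundant: an eavesdropper can match scrambled-frame spectra against a trained spectral vector codebook to recover the permutation, which is why the plain permutation scrambler needs strengthening.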


Proceedings ArticleDOI
29 Oct 1993
TL;DR: In this article, the authors proposed a hierarchical encoding scheme which is based upon a two level codebook search and a structural classification of its entries, which increases the reconstruction quality compared to the full search with a fraction of its computational effort.
Abstract: This paper presents a method for fast encoding of still images based on iterated function systems (IFSs). The major disadvantage of this coding approach, usually referred to as fractal coding, is the high computational effort of the encoding process compared to, e.g., the JPEG algorithm. This is mainly due to the costly 'full search' of the transform parameters within a fractal codebook. We therefore propose a hierarchical encoding scheme which is based upon a two-level codebook search and a structural classification of its entries. In this way, only a small subset of the codebook has to be considered, which increases encoding speed significantly. Refining the initial codebook and applying a second search even increases the reconstruction quality compared to the full search, at a fraction of its computational effort.

Journal ArticleDOI
TL;DR: A new variable-rate side-match finite-state vector quantization with a block classifier (CSMVQ) algorithm is described; the improvement over SMVQ is up to 1.761 dB at a lower bit rate, and the improvement over ordinary VQ can be up to 3 dB at nearly the same bit rate.
Abstract: Future B-ISDN (broadband integrated services digital network) users will be able to send various kinds of information, such as voice, data, and image, over the same network and send information only when necessary. It has been recognized that variable-rate encoding techniques are more suitable than fixed-rate techniques for encoding images in a B-ISDN environment. A new variable-rate side-match finite-state vector quantization with a block classifier (CSMVQ) algorithm is described. In an ordinary fixed-rate SMVQ, the size of the state codebook is fixed. In the CSMVQ algorithm presented, the size of the state codebook is changed according to the characteristics of the current vector, which can be predicted by a block classifier. In experiments, the improvement over SMVQ was up to 1.761 dB at a lower bit rate. Moreover, the improvement over VQ can be up to 3 dB at nearly the same bit rate.
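A hedged sketch of the side-match mechanism CSMVQ builds on: the state codebook for the current block is the subset of master codevectors whose top row and left column best continue the already-decoded upper and left neighbours, and a block classifier chooses the state-codebook size. The variance-based classifier below is a crude stand-in, purely an assumption.

```python
import numpy as np

def state_codebook(master, upper, left, m):
    """Side-match selection: rank master codevectors (b x b blocks) by how well
    their top row and left column continue the decoded neighbouring blocks."""
    side_err = ((master[:, 0, :] - upper[-1, :]) ** 2).sum(axis=1) \
             + ((master[:, :, 0] - left[:, -1]) ** 2).sum(axis=1)
    return np.argsort(side_err)[:m]               # indices of the m best side matches

rng = np.random.default_rng(8)
master = rng.standard_normal((256, 4, 4))         # master codebook of 4x4 blocks
upper, left = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

# CSMVQ's block classifier predicts how 'busy' the current block is and varies
# the state-codebook size: more bits for active blocks, fewer for smooth ones.
activity = upper[-1, :].var() + left[:, -1].var() # crude stand-in classifier
m = 32 if activity > 1.0 else 8                   # variable rate: log2(m) bits
state = state_codebook(master, upper, left, m)

block = rng.standard_normal((4, 4))               # block to encode
best = state[np.argmin(((master[state] - block) ** 2).sum(axis=(1, 2)))]
```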

Journal ArticleDOI
TL;DR: This paper describes the use of artificial neural networks for acoustic to articulatory parameter mapping, and shows that a single feed‐forward neural net is unable to perform this mapping sufficiently well when trained on a large data set.
Abstract: A long‐standing problem in the analysis and synthesis of speech by articulatory description is the estimation of the vocal tract shape parameters from natural input speech. Methods to relate spectral parameters to articulatory positions are feasible if a sufficiently large amount of data is available. This, however, results in a high computational load and large memory requirements. Further, one needs to accommodate ambiguities in this mapping due to the nonuniqueness problem (i.e., several vocal tract shapes can result in identical spectral envelopes). This paper describes the use of artificial neural networks for acoustic to articulatory parameter mapping. Experimental results show that a single feed‐forward neural net is unable to perform this mapping sufficiently well when trained on a large data set. An alternative procedure is proposed, based on an assembly of neural networks. Each network is designated to a specific region in the articulatory space, and performs a mapping from cepstral values into tract areas. The training of this assembly is executed in two stages: In the first stage, a codebook of suitably normalized articulatory parameters is used, and in the second stage, real speech data are used to further improve the mapping. During synthesis, neural networks are selected by dynamic programming using a criterion that ensures smoothly varying vocal tract shapes while maintaining a good spectral match. The method is able to accommodate nonuniqueness in acoustic‐to‐articulatory mapping and can be bootstrapped efficiently from natural speech. Results on the performance of this procedure compared to other mapping procedures, including codebook look‐up and a single multilayered network, are presented.

Book ChapterDOI
13 Sep 1993
TL;DR: A new vector quantization method is proposed which generates codebooks incrementally by inserting vectors in areas of the input vector space where the quantization error is especially high until the desired number of codebook vectors is reached.
Abstract: A new vector quantization method is proposed which generates codebooks incrementally. New vectors are inserted in areas of the input vector space where the quantization error is especially high until the desired number of codebook vectors is reached. A one-dimensional topological neighborhood makes it possible to interpolate new vectors from existing ones. Vectors not contributing to error minimization are removed. After the desired number of vectors is reached, a stochastic approximation phase fine tunes the codebook. The final quality of the codebooks is exceptional. A comparison with two well-known methods for vector quantization was performed by solving an image compression problem. The results indicate that the new method is significantly better than both other approaches.
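A minimal sketch of growth-by-insertion in this spirit: repeatedly insert a new vector where the accumulated quantization error is highest, interpolated between the worst codevector and a neighbour. The 1-D topological neighbourhood, the removal of non-contributing vectors, and the stochastic-approximation fine-tuning are simplified away here (nearest codevector and Lloyd iterations are used instead), so this approximates the method rather than reimplementing it.

```python
import numpy as np

def grow_codebook(data, n_codes, lloyd_iters=10, seed=0):
    """Incremental codebook generation: insert new vectors in the region with
    the largest accumulated quantization error, then fine-tune."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), 2, replace=False)].astype(float)
    while len(codebook) < n_codes:
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        nearest, err = d2.argmin(axis=1), d2.min(axis=1)
        # Codevector whose region has the largest accumulated error.
        worst = int(np.argmax(np.bincount(nearest, weights=err,
                                          minlength=len(codebook))))
        d_cb = ((codebook - codebook[worst]) ** 2).sum(axis=1)
        d_cb[worst] = np.inf
        neighbour = int(np.argmin(d_cb))   # stand-in for the 1-D topological neighbour
        codebook = np.vstack([codebook,
                              0.5 * (codebook[worst] + codebook[neighbour])])
    # Fine-tuning phase: Lloyd iterations instead of stochastic approximation.
    for _ in range(lloyd_iters):
        nearest = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for i in range(len(codebook)):
            members = data[nearest == i]
            if len(members):
                codebook[i] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(9)
cb = grow_codebook(rng.standard_normal((800, 2)), n_codes=32)
```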

Journal ArticleDOI
TL;DR: A fast algorithm for full-search vector quantisation is proposed, which exploits the statistical properties of the source as well as the topological structure of the codebook, and the saving increases with increasing codebook size.
Abstract: A fast algorithm for full-search vector quantisation is proposed, which exploits the statistical properties of the source as well as the topological structure of the codebook. The computational complexity reduces to a few percent relative to the standard full search, and the saving increases with increasing codebook size.

Journal ArticleDOI
TL;DR: A new predictive vector quantization (PVQ) technique capable of exploring the nonlinear dependencies in addition to the linear dependencies that exist between adjacent blocks (vectors) of pixels is introduced.
Abstract: A new predictive vector quantization (PVQ) technique capable of exploring the nonlinear dependencies in addition to the linear dependencies that exist between adjacent blocks (vectors) of pixels is introduced. The two components of the PVQ scheme, the vector predictor and the vector quantizer, are implemented by two different classes of neural networks. A multilayer perceptron is used for the predictive component, and Kohonen self-organizing feature maps are used to design the codebook for the vector quantizer. The multilayer perceptron uses the nonlinearity associated with its processing units to perform a nonlinear vector prediction. The second component of the PVQ scheme vector quantizes the residual vector that is formed by subtracting the output of the perceptron from the original input vector. The joint optimization of the two components of the PVQ scheme is also achieved. Simulation results are presented for still images with high visual quality.

Patent
01 Apr 1993
TL;DR: Methods for compressing data in a system employing vector quantization (VQ) and Huffman coding are described, in which each selected VQ codevector is identified by a rate dependent Huffman codeword specifying a codebook subset and a substantially rate independent Huffman codeword specifying the codevector within that subset.
Abstract: Methods for compressing data in a system employing vector quantization (VQ) and Huffman coding comprise: First, quantizing an input vector by representing the input vector with a VQ codevector selected from a VQ codebook partitioned into subsets, wherein each subset comprises codevectors and each codevector is stored at a corresponding address in the VQ codebook. Next, generating a rate dependent Huffman codeword for the selected codevector, wherein the rate dependent Huffman codeword identifies the subset of the VQ codebook in which the selected codevector is stored. And finally, generating a substantially rate independent Huffman codeword for the selected codevector, wherein the substantially rate independent Huffman codeword identifies a particular VQ codevector within the subset identified by the rate dependent Huffman codeword.

Journal ArticleDOI
TL;DR: Modular linearly connected VLSI architectures for VQ that can support real-time image processing applications and require fixed I/O bandwidth with the host and allow codebook changes are proposed.
Abstract: Vector quantization (VQ) has become feasible for real-time applications through VLSI technology. In this paper, the authors propose modular, linearly connected VLSI architectures for VQ that can support real-time image processing applications. Each processing element in the design consists of an adder and a shift register instead of a multiplier. The designs require fixed I/O bandwidth with the host and allow codebook changes. The throughput is independent of the codebook size. These designs can be extended to the case when a fixed number of processors are available. A number of VQ schemes (single-stage and multistage VQ, classified VQ, etc.) can be implemented using this approach.

Patent
Ke-Chiang Chu
30 Apr 1993
TL;DR: An improved multi-codebook phase-in coding process for coding electronic data is proposed, in which, for each received electronic input data item, the coding process detects whether that input exceeds a current coding maximum, selects a codebook coding method from one or more codebook coding methods accordingly, and then encodes that input with the selected codebook coding method to generate a coded output.
Abstract: An improved multi-codebook phase-in coding process for coding electronic data wherein, for each received electronic input data item, the coding process detects whether that input exceeds a current coding maximum, then selects a codebook coding method from one or more codebook coding methods in response to detecting whether that input exceeds the current coding maximum, and then encodes that input in accordance with the selected codebook coding method to generate a coded output. A corresponding codebook indicator is inserted into the generated coded output data stream to indicate which codebook method to use to decode the coded output data. During decoding, the decoding process detects a decode method indicator associated with each encoded input, and decodes in accordance with the decode method corresponding to the detected indicator to generate a decoded output.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A high-quality 8 kbit/s speech coder based on CS-CELP (conjugate-structure code-excited linear prediction) with 10 ms frame length is presented, and it is found that the proposed coder is robust against random bit errors.
Abstract: A high-quality 8 kbit/s speech coder based on CS-CELP (conjugate-structure code-excited linear prediction) with 10 ms frame length is presented. To provide high quality in both error-free and error conditions, it uses four schemes: LSP (line spectrum pair) quantization using interframe correlation, preselection in the codebook search, a conjugate structure, and backward adaptation of the VQ (vector quantization) gain. LSP parameters are quantized by multistage VQ with MA prediction. The preselection of the codebook reduces computational complexity and improves robustness. The conjugate structure improves the ability to handle random bit errors and reduces memory requirements. The backward adaptation of the VQ gain provides high quality and robustness without having to transmit input speech power information. Subjective testing indicates that the quality of the proposed coder is equivalent to that of 32 kbit/s ADPCM (adaptive differential pulse code modulation) under error-free conditions. It is also found that the proposed coder is robust against random bit errors.

PatentDOI
TL;DR: An adaptive pitch pulse enhancer and method, adaptive to a voicing measure of input speech, for modifying the adaptive codebook of a CELP search loop to enhance the pitch pulse structure was proposed in this paper.
Abstract: An adaptive pitch pulse enhancer and method, adaptive to a voicing measure of input speech, for modifying the adaptive codebook of a CELP search loop to enhance the pitch pulse structure of the adaptive codebook. The adaptive pitch pulse enhancer determines a voicing measure of an input signal, the voicing measure being voiced when the input signal includes voiced speech and the voicing measure being unvoiced when the input signal does not include voiced speech, modifies a total excitation vector produced by the CELP search loop in accordance with the voicing measure of the input signal, and updates the adaptive codebook of the CELP search loop by storing the modified total excitation vector in the adaptive codebook.

Patent
11 May 1993
TL;DR: In this paper, an image compression coding method of a digital image transmission system was proposed, which utilizes a known vector quantization process and uses a classified codebook for image compression.
Abstract: The invention relates to an image compression coding method for a digital image transmission system. The method utilizes a known vector quantization process. In the method of the invention, at the transmitting end the block (13) to be coded is divided into quadrants (14); each quadrant is subjected to vector quantization in a sub-coder (19), utilizing a classified codebook (20) whose class is defined on the basis of the vector index (i) of the original block and the quadrant label (A', B', C', D') of the sub-block; the vector index (j) of the sub-block with respect to this classified codebook (20) is transmitted to the receiving end. At the receiving end, a classified codebook is chosen on the basis of the transmitted vector index (i) of the original block and the quadrant label (A', B', C', D') of the sub-block; from this chosen classified codebook, which is identical to the classified codebook at the transmitting end, the code vector in question is looked up on the basis of the vector index of the transmitted sub-block, so that a reconstruction is obtained for the sub-block. When designing each classified codebook, the training image material used is that part of the original training image set which receives a class index corresponding to the particular class of the codebook when the above-described method is applied to the training image set itself.

Proceedings ArticleDOI
18 Aug 1993
TL;DR: The paper proposes a new approach for efficient SAR raw data compression which consists of first compressing the raw data with a block adaptive quantizer (BAQ) and then performing further compression with a vector quantization (VQ) algorithm.
Abstract: The paper proposes a new approach for efficient SAR raw data compression which consists of first compressing the raw data with a block adaptive quantizer (BAQ) and then performing further compression with a vector quantization (VQ) algorithm. This combination is computationally efficient since only one small codebook, independent of the data set, is needed for the coding. Signal-to-distortion ratios better than 12 dB and 6 dB are achieved at data rates of 2 and 1 bit/sample, respectively.
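A hedged sketch of the two-stage idea: a block adaptive quantizer normalizes each block by its own magnitude and applies a fixed low-rate quantizer, and the resulting code stream is then vector quantized. The 2-bit reproduction levels and all sizes are illustrative, and the codebook here is trained on the data for brevity, even though the paper's point is that a small data-independent codebook suffices because BAQ output statistics are approximately fixed.

```python
import numpy as np

def baq_2bit(block):
    """Block adaptive quantizer: normalize by the block's own spread and apply
    a fixed 2-bit quantizer (illustrative levels, not the paper's)."""
    sigma = block.std() + 1e-12
    levels = np.array([-1.5, -0.5, 0.5, 1.5])      # hypothetical reproduction levels
    codes = np.abs(block[:, None] / sigma - levels[None, :]).argmin(axis=1)
    return codes, sigma

def baq_vq(raw, block=32, vq_dim=4, n_codes=64, seed=0):
    """Two-stage SAR raw-data compression sketch: BAQ per block, then VQ of
    short groups of the BAQ output codes."""
    rng = np.random.default_rng(seed)
    codes = np.concatenate([baq_2bit(raw[i:i + block])[0]
                            for i in range(0, len(raw), block)]).astype(float)
    vecs = codes[: len(codes) // vq_dim * vq_dim].reshape(-1, vq_dim)
    codebook = vecs[rng.choice(len(vecs), n_codes, replace=False)]
    indices = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    return indices, codebook

indices, cb = baq_vq(np.random.default_rng(10).standard_normal(4096))
```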

PatentDOI
TL;DR: There is provided a code excitation linear predictive coding or decoding apparatus in which a code vector, which is transmitted by a codebook such as a stochastic codebook, is converted adaptively in accordance with vocal tract analysis information (LPC) so that a high quality reproduction speech is obtained at a low coding rate.
Abstract: There is provided a code excitation linear predictive (CELP) coding or decoding apparatus in which a code vector, which is transmitted by a codebook such as a stochastic codebook, is converted adaptively in accordance with vocal tract analysis information (LPC) so that a high quality reproduction speech is obtained at a low coding rate. Further, in order to obtain a similar effect, a pulse-like excitation codebook formed of an isolated impulse is provided in addition to the adaptive excitation codebook and stochastic excitation codebook so that either the stochastic excitation codebook or the pulse-like excitation codebook is selectively used to provide a vocal tract parameter as a linear spectrum pair parameter.

Journal ArticleDOI
TL;DR: Experimental results indicate that the new FSVQ system is very computationally efficient and achieves good picture quality at an average rate of 0.33 b per pixel (bpp).
Abstract: Presents an image compression method using a finite-state vector quantizer (FSVQ) with derailment compensation. The derailment problem in conventional FSVQ systems is discussed first. Then a novel scheme is presented to overcome the problem. In this scheme, each state is partitioned into K Voronoi regions, and the corresponding region codebooks are designed individually. A simple initial classifier based on a mean codebook is also presented to determine the state and the region for an input vector. The results show that the derailment problem has been completely overcome. Experimental results indicate that the new FSVQ system is very computationally efficient and achieves good picture quality at an average rate of 0.33 bit per pixel (bpp).

Proceedings ArticleDOI
30 Mar 1993
TL;DR: In this proposed variation, each codebook in the system is given a mean or 'prediction' value which is subtracted from all supervectors that map to the given codebook, and the chosen codebook's codewords are then used to encode the resulting residuals.
Abstract: Weighted universal vector quantization uses traditional codeword design techniques to design locally optimal multi-codebook systems. Application of this technique to a sequence of medical images produces a 10.3 dB improvement over standard full-search vector quantization followed by entropy coding, at the cost of increased complexity. In the proposed variation, each codebook in the system is given a mean or 'prediction' value which is subtracted from all supervectors that map to the given codebook; the chosen codebook's codewords are then used to encode the resulting residuals. Application of the mean-removed system to the medical data set achieves up to 0.5 dB improvement at no rate expense.
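A small sketch of the mean-removed multi-codebook encoding step described above: each codebook carries a 'prediction' mean that is subtracted from the supervector's constituent vectors, the residuals are encoded with that codebook, and the codebook with the lowest total distortion wins. Sizes and the random stand-in data are illustrative.

```python
import numpy as np

def encode_supervector(sv, codebooks, means, dim):
    """Pick the (mean-removed) codebook that encodes a supervector best.

    Each codebook c has a prediction value means[c] subtracted from the
    supervector; that codebook's codewords then encode the residual vectors."""
    best = (None, None, np.inf)
    vecs = sv.reshape(-1, dim)
    for c, (cb, mu) in enumerate(zip(codebooks, means)):
        residuals = vecs - mu                      # mean removal
        d2 = ((residuals[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        err = d2[np.arange(len(vecs)), idx].sum()
        if err < best[2]:
            best = (c, idx, err)
    return best                # (codebook id, codeword indices, total distortion)

rng = np.random.default_rng(11)
codebooks = [rng.standard_normal((32, 4)) for _ in range(4)]
means = [rng.standard_normal(4) for _ in range(4)]
c, idx, err = encode_supervector(rng.standard_normal(64), codebooks, means, dim=4)
```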

PatentDOI
TL;DR: In this paper, a codebook of excitation frames is used to derive the parameters of a synthesis filter and a suitable excitation, which is selected from a single-pulse excitation.
Abstract: Speech is analyzed to derive the parameters of a synthesis filter and the parameters of a suitable excitation which is selected from a codebook of excitation frames. The selection of the codebook entry is facilitated by determining a single-pulse excitation (e.g., using conventional multipulse excitation techniques), and using the position of this pulse to narrow the codebook search.

PatentDOI
Daniel Lin
TL;DR: In this article, a speech communication system using a code excited linear prediction speech decoder is described, where the decoder uses a first codebook containing a first digital value sequence selected from the set of binary values {0, 1}.
Abstract: A speech communication system using a code excited linear prediction speech decoder. The decoder uses a first codebook containing a first digital value sequence with values selected from the set of binary values {0, 1}. The decoder also uses a second codebook containing a second digital value sequence with values selected from the set of binary values {−1, 0}. The first digital value sequence and the second digital value sequence are combined to become a third digital value sequence with values from the ternary set {−1, 0, 1}.
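Read as element-wise addition (one plausible interpretation of 'combined'; the patent's exact rule is not spelled out above), the two binary sequences yield the ternary excitation directly:

```python
import numpy as np

rng = np.random.default_rng(12)
first = rng.integers(0, 2, size=40)     # first codebook sequence, values in {0, 1}
second = -rng.integers(0, 2, size=40)   # second codebook sequence, values in {-1, 0}

excitation = first + second             # element-wise sum is ternary: {-1, 0, 1}
assert set(np.unique(excitation)) <= {-1, 0, 1}
```

Storing two binary sequences instead of one ternary sequence keeps each codebook trivially simple while the combination recovers the richer ternary alphabet.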

Journal ArticleDOI
01 Jun 1993
TL;DR: The results of employing neural network classification of states (NNCS) in finite state vector quantisation (FSVQ) are presented and it is shown that by using NNCS the required bit rate is about 0.25 bits/pixel at 30 dB peak SNR resulting in high quality reconstructed imagery, while the memory requirement is reduced by a factor of 256.
Abstract: The results of employing neural network classification of states (NNCS) in finite state vector quantisation (FSVQ) are presented. In addition to intrablock correlation, already exploited by vector quantisation (VQ), the new design takes advantage of the interblock spatial correlation in typical grey-level images. The main achievement of FSVQ techniques is to assure access to a large master codebook for quantising purposes, thus achieving high image quality, while utilising a small state codebook for the purpose of specifying the block label, thus utilising low bit rates. Typically, FSVQ techniques require a very large memory space for the storage of the numerous state codebooks. However, with NNCS the memory space requirements can be reduced by a large factor (about 10²–10³) to manageable size, with little or no impairment of image quality, in comparison to FSVQ. This is accomplished by a neural network classification of finite states into representation states, whose associated states all share the same codebook. This codebook is populated by the most frequently occurring codevectors in the representation state. Numerical and pictorial results of simulation experiments are presented for the image LENA. They show that by using NNCS the required bit rate is about 0.25 bits/pixel at 30 dB peak SNR resulting in high quality reconstructed imagery, while the memory requirement is reduced by a factor of 256.

Patent
15 Jun 1993
TL;DR: A vector quantization encoder comprises a codebook built with the so-called Linde-Buzo-Gray (LBG) algorithm, an intermediary code book built from the initial codebook and a Hash-table containing subsets of the original codebook.
Abstract: A vector quantization encoder comprises a codebook built with the so-called Linde-Buzo-Gray (LBG) algorithm, an intermediary codebook built from the initial codebook (LBG), and a hash table containing subsets of the initial codebook (LBG). Vector quantization is then performed in two steps. First, a multistep prequantization of an input vector gives an index into the hash table which points to a subset of vectors of the initial codebook (LBG). Then a full search is performed within the pointed-to subset of vectors of the initial codebook (LBG).
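A hedged single-step rendering of the two-step scheme (the patent's multistep prequantization and its subset construction are not specified above, so the bucketing rule below is an assumption): a small prequantizer codebook hashes the input to a subset of the LBG codebook, and the full search runs only within that subset. Note the pointed-to subset need not contain the true nearest neighbour, which is the usual speed/optimality tradeoff of hash-based VQ.

```python
import numpy as np

def build_hash_table(lbg_codebook, pre_codebook):
    """Map each prequantizer cell to the subset of LBG codevectors whose own
    prequantization lands in that cell (a simplified bucketing rule)."""
    pre_idx = ((lbg_codebook[:, None, :] - pre_codebook[None, :, :]) ** 2) \
                  .sum(-1).argmin(axis=1)
    return {b: np.flatnonzero(pre_idx == b) for b in range(len(pre_codebook))}

def two_step_encode(x, lbg_codebook, pre_codebook, table):
    # Step 1: cheap prequantization gives a hash-table index.
    b = int(((pre_codebook - x) ** 2).sum(axis=1).argmin())
    subset = table[b]
    if len(subset) == 0:                   # empty bucket: fall back to full search
        subset = np.arange(len(lbg_codebook))
    # Step 2: full search restricted to the pointed-to subset.
    j = subset[((lbg_codebook[subset] - x) ** 2).sum(axis=1).argmin()]
    return int(j)

rng = np.random.default_rng(13)
lbg_cb = rng.standard_normal((256, 8))     # codebook built with the LBG algorithm
pre_cb = rng.standard_normal((16, 8))      # small intermediary/prequantizer codebook
table = build_hash_table(lbg_cb, pre_cb)
idx = two_step_encode(rng.standard_normal(8), lbg_cb, pre_cb, table)
```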