
Showing papers on "Codebook published in 1992"


Journal ArticleDOI
TL;DR: A deterministic annealing approach is suggested to search for the optimal vector quantizer given a set of training data and the resulting codebook is independent of the codebook used to initialize the iterations.
Abstract: A deterministic annealing approach is suggested to search for the optimal vector quantizer given a set of training data. The problem is reformulated within a probabilistic framework. No prior knowledge of the source density is assumed, and the principle of maximum entropy is used to obtain the association probabilities at a given average distortion. The corresponding Lagrange multiplier is inversely related to the 'temperature' and is used to control the annealing process. In this process, as the temperature is lowered, the system undergoes a sequence of phase transitions when existing clusters split naturally, without use of heuristics. The resulting codebook is independent of the codebook used to initialize the iterations.

280 citations
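
To make the annealing procedure above concrete, here is a minimal numerical sketch, assuming squared-error distortion, Gibbs-form association probabilities, and a geometric cooling schedule; the function da_vq and every parameter value are illustrative, not the authors' implementation.

```python
import numpy as np

def da_vq(data, n_codes=4, t_start=10.0, t_stop=0.01, cooling=0.9, seed=0):
    """Deterministic-annealing vector quantizer design (illustrative sketch).

    At each temperature T, association probabilities follow a Gibbs
    distribution over squared-error distortions (maximum entropy at a
    given average distortion); codevectors are the probability-weighted
    centroids. Lowering T anneals the soft clustering toward hard VQ.
    """
    rng = np.random.default_rng(seed)
    # Start all codevectors near the data centroid; splits emerge as T drops.
    codes = data.mean(axis=0) + 1e-3 * rng.standard_normal((n_codes, data.shape[1]))
    t = t_start
    while t > t_stop:
        d = ((data[:, None, :] - codes[None, :, :]) ** 2).sum(axis=2)  # distortions
        p = np.exp(-(d - d.min(axis=1, keepdims=True)) / t)            # Gibbs weights
        p /= p.sum(axis=1, keepdims=True)                              # association probs
        codes = (p[:, :, None] * data[:, None, :]).sum(axis=0) / (p.sum(axis=0)[:, None] + 1e-12)
        t *= cooling                                                   # cool the system
    return codes

# toy usage: two well-separated clusters
x = np.concatenate([np.random.randn(200, 2), np.random.randn(200, 2) + 5.0])
print(da_vq(x, n_codes=2))
```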


Proceedings ArticleDOI
07 Jun 1992
TL;DR: An overview of the software package LVQPAK, which has been developed for convenient and effective application of learning vector quantization algorithms, is presented and two new features are included: fast conflict-free initial distribution of codebook vectors into the class zones and the optimized-learning-rate algorithm OLVQ1.
Abstract: An overview of the software package LVQPAK, which has been developed for convenient and effective application of learning vector quantization algorithms, is presented. Two new features are included: fast conflict-free initial distribution of codebook vectors into the class zones and the optimized-learning-rate algorithm OLVQ1.

148 citations
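
A brief sketch of the LVQ1-style update with a per-codevector optimized learning rate in the spirit of OLVQ1; the rate recursion, array layout, and toy data below are assumptions for illustration and are not LVQPAK code.

```python
import numpy as np

def olvq1_step(codebook, labels, x, y, alphas):
    """One OLVQ1-style update (sketch): move the winning codebook vector
    toward the sample if its class matches, away otherwise, with a
    per-codevector learning rate."""
    c = np.argmin(((codebook - x) ** 2).sum(axis=1))      # nearest codevector
    s = 1.0 if labels[c] == y else -1.0
    codebook[c] += s * alphas[c] * (x - codebook[c])       # LVQ1 move
    alphas[c] = alphas[c] / (1.0 + s * alphas[c])          # optimized-rate recursion
    return c

# toy usage: two classes, two codevectors per class
rng = np.random.default_rng(1)
cb = rng.standard_normal((4, 2)); lab = np.array([0, 0, 1, 1])
alph = np.full(4, 0.3)
for _ in range(500):
    y = rng.integers(0, 2)
    x = rng.standard_normal(2) + (0.0 if y == 0 else 4.0)
    olvq1_step(cb, lab, x, y, alph)
print(cb)
```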


Journal ArticleDOI
TL;DR: An adaptive electronic neural network processor has been developed for high-speed image compression based on a frequency-sensitive self-organization algorithm that is quite efficient and can achieve near-optimal results.
Abstract: An adaptive electronic neural network processor has been developed for high-speed image compression based on a frequency-sensitive self-organization algorithm. The performance of this self-organization network and that of a conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results. The neural network processor includes a pipelined codebook generator and a parallel vector quantizer, which obtains a time complexity O(1) for each quantization vector. A mixed-signal design technique with analog circuitry to perform neural computation and digital circuitry to process multiple-bit address information is used. A prototype chip for a 25-D adaptive vector quantizer of 64 code words was designed, fabricated, and tested. It occupies a silicon area of 4.6 mm * 6.8 mm in a 2.0 µm scalable CMOS technology and provides a computing capability as high as 3.2 billion connections/s. The experimental results for the chip and the winner-take-all circuit test structure are presented.

135 citations


Journal ArticleDOI
TL;DR: An algorithm is introduced for the joint design of the stage codebooks to optimize the overall performance of multistage vector quantization; it achieves a modest performance improvement.
Abstract: Multistage vector quantization (MSVQ) can achieve very low encoding and storage complexity in comparison to unstructured vector quantization. However, the conventional stage-by-stage design of the codebooks in MSVQ is suboptimal with respect to the overall performance measure. The authors introduce an algorithm for the joint design of the stage codebooks to optimize the overall performance. The performance improvement, although modest, is achieved with no effect on encoding or storage complexity and only a slight increase in design effort.

83 citations
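
The joint-design goal can be illustrated with a toy alternating refinement of two stage codebooks against the overall residual; this sketch only shows stage codebooks being tuned to the end-to-end distortion and is not the authors' algorithm.

```python
import numpy as np

def nearest(cb, x):
    """Index of the nearest codevector (squared error) for each row of x."""
    return np.argmin(((cb[None, :, :] - x[:, None, :]) ** 2).sum(axis=2), axis=1)

def joint_two_stage(data, cb1, cb2, iters=10):
    """Alternating joint refinement of two MSVQ stage codebooks (sketch):
    each stage's codevectors are re-estimated as centroids of the overall
    quantization error left by the other stage, so both stages are tuned
    to the end-to-end distortion rather than stage by stage."""
    for _ in range(iters):
        i1 = nearest(cb1, data)                    # stage-1 assignments
        i2 = nearest(cb2, data - cb1[i1])          # stage-2 on the residual
        for k in range(len(cb2)):                  # update stage 2
            m = i2 == k
            if m.any():
                cb2[k] = (data[m] - cb1[i1[m]]).mean(axis=0)
        for k in range(len(cb1)):                  # update stage 1 against total error
            m = i1 == k
            if m.any():
                cb1[k] = (data[m] - cb2[i2[m]]).mean(axis=0)
    return cb1, cb2

# toy usage
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 4))
c1 = rng.standard_normal((8, 4)); c2 = 0.1 * rng.standard_normal((8, 4))
joint_two_stage(data, c1, c2)
```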


PatentDOI
Mei Yong
TL;DR: In this article, a spectral interpolation (500, 600) and efficient excitation codebook search method (700) were developed for a Code-Excited Linear Predictive (CELP) speech coder.
Abstract: A novel spectral interpolation (500, 600) and efficient excitation codebook search method (700) developed for a Code-Excited Linear Predictive (CELP) speech coder (100) is set forth. The interpolation is performed on an impulse response of the spectral synthesis filter. As the result of using this new set of interpolation parameters, the computations associated with an excitation codebook search in a CELP coder are considerably reduced. Furthermore, a coder utilizing this new interpolation approach provides noticeable improvement in speech quality coded at low bit-rates.

61 citations


PatentDOI
TL;DR: An excitation vector of the previous frame stored in an adaptive codebook is cut out with a selected pitch period and is repeated until one frame is formed, by which a periodic component codevector is generated.
Abstract: An excitation vector of the previous frame stored in an adaptive codebook is cut out with a selected pitch period. The excitation vector thus cut out is repeated until one frame is formed, by which a periodic component codevector is generated. An optimum pitch period is searched for so that distortion of a reconstructed speech obtained by exciting a linear predictive synthesis filter with the periodic component codevector is minimized. Thereafter, a random codevector selected from a random codebook is cut out with the optimum pitch period and is repeated until one frame is formed, by which a repetitious random codevector is generated. The random codebook is searched for a random codevector which minimizes the distortion of the reconstructed speech which is provided by exciting the synthesis filter with the repetitious random codevector.

61 citations
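
The cut-and-repeat construction of the periodic component codevector can be sketched as follows; the linear predictive synthesis filter is omitted for brevity, so the toy search below matches waveforms directly, and all names are illustrative.

```python
import numpy as np

def periodic_codevector(past_excitation, pitch, frame_len):
    """Cut the last `pitch` samples of the previous excitation and repeat
    them until one frame is filled (the adaptive-codebook construction
    described above)."""
    segment = past_excitation[-pitch:]
    reps = int(np.ceil(frame_len / pitch))
    return np.tile(segment, reps)[:frame_len]

# toy usage: pick the pitch period that best matches a target frame
past = np.random.randn(320)
target = np.tile(past[-40:], 4)[:160]
best = min(range(20, 148),
           key=lambda p: np.sum((target - periodic_codevector(past, p, 160)) ** 2))
print(best)  # expected to be 40 for this toy target
```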


Proceedings ArticleDOI
23 Mar 1992
TL;DR: A nonlinear generalization of the family of autoregressive signal models is introduced that leads to an interpolation strategy resembling a predictive counterpart to vector quantization for minimum mean-square error prediction.
Abstract: A nonlinear generalization of the family of autoregressive signal models is introduced. This generalization can be viewed as an autoregressive model with state-varying parameters. For such signals, minimum mean-square error prediction can be reformulated as an interpolation problem. A novel interpretation of the signal as a codebook for its own prediction leads to an interpolation strategy resembling a predictive counterpart to vector quantization. The applicability of this model is then demonstrated empirically for a variety of signals.

50 citations


Journal ArticleDOI
TL;DR: An iterative algorithm for designing a set of locally optimal codebooks is developed and results demonstrate that this improved decoding technique can be applied in the JPEG baseline system to decode enhanced quality pictures from the bit stream generated by the standard encoding scheme.
Abstract: Transform coding, a simple yet efficient image coding technique, has been adopted by the Joint Photographic Experts Group (JPEG) as the basis for an emerging coding standard for compression of still images. However, for any given transform encoder, the conventional inverse transform decoder is suboptimal. Better performance can be obtained by a nonlinear interpolative decoder that performs table lookups to reconstruct the image blocks from the code indexes. Each received code index of an image block addresses a particular codebook to fetch a component vector. The image block can be reconstructed as the sum of the component vectors for that block. An iterative algorithm for designing a set of locally optimal codebooks is developed. Computer simulation results demonstrate that this improved decoding technique can be applied in the JPEG baseline system to decode enhanced quality pictures from the bit stream generated by the standard encoding scheme.

40 citations
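
The table-lookup decoding step lends itself to a very small sketch: each received code index fetches a component vector and the block is their sum. The codebook shapes and names below are assumptions, not the JPEG-compatible implementation described in the paper.

```python
import numpy as np

def nonlinear_interpolative_decode(code_indexes, codebooks):
    """Reconstruct one image block as the sum of component vectors fetched
    by table lookup, one per received code index. `codebooks` is a list of
    2-D arrays, one per index position; shapes are illustrative."""
    return sum(cb[i] for cb, i in zip(codebooks, code_indexes))

# toy usage: 3 code indexes address 3 codebooks of 8x8 component blocks
rng = np.random.default_rng(0)
books = [rng.standard_normal((16, 64)) for _ in range(3)]
block = nonlinear_interpolative_decode([2, 7, 11], books).reshape(8, 8)
print(block.shape)
```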


Journal ArticleDOI
V.J. Mathews
TL;DR: A gradient-based approach for codebook design that does not require any multiplications or median computation is proposed and the viability of multiplication-free predictive vector quantization of image data is demonstrated.
Abstract: The author considers vector quantization that uses the L1 distortion measure for its implementation. A gradient-based approach for codebook design that does not require any multiplications or median computation is proposed. Convergence of this method is proved rigorously under very mild conditions. Simulation examples comparing the performance of this technique with that of the LBG algorithm show that the gradient-based method, in spite of its simplicity, produces codebooks with average distortions comparable to those of the LBG algorithm. The codebook design algorithm is then extended to a distortion measure that has piecewise-linear characteristics. Once again, by appropriate selection of the parameters of the distortion measure, the encoding as well as the codebook design can be implemented with zero multiplications. The author applies the techniques in predictive vector quantization of images and demonstrates the viability of multiplication-free predictive vector quantization of image data.

35 citations
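
A sketch of a sign-based update consistent with the multiplication-free idea described above: under the L1 distortion the gradient with respect to the winning codevector reduces to a sign pattern, and a power-of-two step size keeps the update shift-only. The step size and training loop below are illustrative assumptions, not the author's exact algorithm.

```python
import numpy as np

def l1_gradient_step(codebook, x, shift=6):
    """One sign-based update for L1-distortion codebook design (sketch):
    the winning codevector moves by +/- 2**-shift per component, i.e. the
    step follows sign(x - c), which needs no multiplications or medians."""
    c = np.argmin(np.abs(codebook - x).sum(axis=1))        # L1 nearest codevector
    codebook[c] += np.sign(x - codebook[c]) * 2.0 ** -shift
    return c

# toy usage
rng = np.random.default_rng(0)
cb = rng.standard_normal((8, 4))
for _ in range(2000):
    l1_gradient_step(cb, rng.laplace(size=4))
print(cb)
```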


Proceedings ArticleDOI
24 Mar 1992
TL;DR: The results from these two VQ techniques have been compared in terms of compression ratio for a given mean squared error (MSE); the authors used MasPar, a SIMD machine, for both training and encoding.
Abstract: Progressive transmission (PT) using vector quantization (VQ) is called progressive vector quantization (PVQ) and is used for efficient telebrowsing and dissemination of multispectral image data via computer networks. Theoretically any compression technique can be used in PT mode. Here VQ is selected as the baseline compression technique because VQ-encoded images can be decoded by a simple table lookup process, so that users are not burdened with computational problems when using compressed data. Codebook generation, or the training phase, is the most critical part of VQ. Two different algorithms have been used for this purpose. The first of these is based on the well-known Linde-Buzo-Gray (LBG) algorithm. The other is based on self-organizing feature maps (SOFM). Since both training and encoding are computationally intensive tasks, the authors have used MasPar, a SIMD machine, for this purpose. Multispectral imagery obtained from the Advanced Very High Resolution Radiometer (AVHRR) instrument forms the testbed. The results from these two VQ techniques have been compared in terms of compression ratios for a given mean squared error (MSE). The number of bytes required to transmit the image data without loss using this progressive compression technique is usually less than the number of bytes required by the standard UNIX compress algorithm.

35 citations
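
For reference, a bare-bones version of the LBG (generalized Lloyd) training loop used as one of the two codebook generators above; the initialization, iteration count, and empty-cell handling here are simplifications, not the parallel MasPar implementation.

```python
import numpy as np

def lbg(data, n_codes, iters=20, seed=0):
    """Plain LBG / generalized Lloyd codebook training (sketch): alternate
    nearest-neighbour partitioning and centroid updates under squared error."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].copy()
    for _ in range(iters):
        idx = np.argmin(((data[:, None, :] - codes[None, :, :]) ** 2).sum(axis=2), axis=1)
        for k in range(n_codes):
            if np.any(idx == k):
                codes[k] = data[idx == k].mean(axis=0)   # centroid update
    return codes

# toy usage on random 4x4 "image blocks"
blocks = np.random.rand(1000, 16)
cb = lbg(blocks, n_codes=32)
print(cb.shape)  # (32, 16)
```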


PatentDOI
Toshiki Miyano, Kazunori Ozawa
TL;DR: In this article, an autocorrelation of a synthesis signal synthesized from a codevector of an excitation codebook (140) and a linear predictive parameter of an input speech signal is corrected using a linear predictor.
Abstract: A speech coding method which can code a speech signal at a bit rate of 8 kb/s or less by a comparatively small amount of calculation to obtain a good sound quality. An autocorrelation of a synthesis signal synthesized from a codevector of an excitation codebook (140) and a linear predictive parameter of an input speech signal is corrected using an autocorrelation of a synthesis signal synthesized from a codevector of an adaptive codebook (120) and a linear predictive parameter and a cross-correlation between the synthesis signal of the codevector of the adaptive codebook (120) and the synthesis signal of the codevector of the excitation codebook (140). A gain codebook (210) is searched using the corrected autocorrelation and a cross-correlation between a signal obtained by subtraction of the synthesis signal of the codevector of the adaptive codebook (120) from the input speech signal and the synthesis signal of the codevector of the excitation codebook (140).

Journal ArticleDOI
TL;DR: A modified version of the original algorithm is introduced, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree.
Abstract: A pruning algorithm of P.A. Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion-rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion-rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full-search vector quantizers (VQs) for a large range of rates.

PatentDOI
TL;DR: Speech coding of the code excited linear predictive type is implemented by providing an excitation vector which comprises a set of a pre-determined number of pulse patterns from a codebook of P pulse patterns, which have a selected orientation and a pre-determined delay with respect to the starting point of the excitation vectors.
Abstract: Speech coding of the code excited linear predictive type is implemented by providing an excitation vector which comprises a set of a pre-determined number of pulse patterns from a codebook of P pulse patterns, which have a selected orientation and a pre-determined delay with respect to the starting point of the excitation vector. This requires modest computational power and a small memory space, which allows it to be implemented in one signal processor.

Journal ArticleDOI
TL;DR: The design of an encoder for pruned tree-search vector quantization (VQ) is discussed, which allows near-optimal performance in a mean square error sense while keeping the hardware complexity low.
Abstract: The design of an encoder for pruned tree-search vector quantization (VQ) is discussed. This allows near-optimal performance in a mean square error sense while keeping the hardware complexity low. The encoder is partitioned into a slave processor chip that computes the distance and performs minimizations and an off-chip controller that directs the search. Pointer addressing is exploited in the codebook memory to keep the controller hardware simple. Inputs to the slave processor include the source vectors, the code vectors, and external control signals. The slave processor outputs the index of the code vector that best approximates the input in a mean square error sense. The layout for the slave processor has been generated using a 1.2-µm CMOS library and measures 5.76*6.6 mm². Critical path simulation with SPICE indicates a throughput of 89 million multiply-accumulates per second. This implies that real-time processing at MPEG rates can be achieved if the number of levels (N) and the number of children at any node (M) obey a constraint on the product M*N.

PatentDOI
TL;DR: In this paper, a method for modeling time variant signals and multiple tone generating apparatus for a real-time controllable, time variant waveform synthesizer is presented, which is accomplished by storing a DSQ (Demodulated Segment Quantization) codebook representation of a time variant signal.
Abstract: A method for modelling time variant signals and a multiple tone generating apparatus for a real-time controllable, time variant waveform synthesizer are disclosed. Speech or musical tone generation is accomplished by storing a DSQ (Demodulated Segment Quantization) codebook representation of a time variant signal. A DSQ codebook is a parametric representation of a time variant signal, wherein a signal's parameters are a time variant amplitude data sequence, a time variant pitch (advance/delay operator) data sequence, and a data sequence corresponding to a set of invariant waveshapes and their corresponding duration values. A signal is reconstructed by concatenating periodic segments of finite duration, scaling its amplitude via a time variant amplitude data sequence, and altering pitch or harmonic content via a time variant pitch data sequence. A plurality of unique DSQ codebooks and tone generators are assigned to a plurality of key actuations for multi-timbral operation.

Journal ArticleDOI
TL;DR: A real-time vector quantizer architecture for encoding color images is developed; the use of a simple multiplication-free distortion measure and a reduction of the required memory per code vector make the VLSI implementation feasible.
Abstract: Digital image coding using vector quantization (VQ) based techniques provides low bit rates and high-quality coded images, at the expense of intensive computational demands. The computational requirement due to the encoding search process has hindered application of VQ to real-time high-quality coding of color TV images. Reduction of the encoding search complexity through partitioning of a large codebook into the on-chip memories of a concurrent VLSI chip set is proposed. A real-time vector quantizer architecture for encoding color images is developed. The architecture maps the mean/quantized residual vector quantizer (MQRVQ) (an extension of mean/residual VQ) onto a VLSI/LSI chip set. The MQRVQ contributes to the feasibility of the VLSI architecture through the use of a simple multiplication-free distortion measure and reduction of the required memory per code vector. Running at a clock rate of 25 MHz, the proposed hardware implementation of this architecture is capable of real-time processing of 480*768 pixels per frame with a refresh rate of 30 frames/s. The result is a real-time high-quality composite color image coder operating at a fixed rate of 1.12 bits per pixel.

Journal ArticleDOI
TL;DR: It is proposed that vector quantization be implemented for image compression based on neural networks and separate codebooks for edge and background blocks are designed using Kohonen (1984) self-organizing feature maps to preserve edge integrity and improve the efficiency of codebook design.
Abstract: It is proposed that vector quantization be implemented for image compression based on neural networks. Separate codebooks for edge and background blocks are designed using Kohonen (1984) self-organizing feature maps to preserve edge integrity and improve the efficiency of codebook design. A system architecture is proposed, and satisfactory performance is achieved.

Journal ArticleDOI
TL;DR: A technique for reducing the complexity of spatial-domain image vector quantization (VQ) and a modified LBG algorithm incorporating the new distortion measure are proposed; the codevector dimension is not reduced and a better image quality is guaranteed.
Abstract: A technique for reducing the complexity of spatial-domain image vector quantization (VQ) is proposed. The conventional spatial-domain distortion measure is replaced by a transform-domain subspace distortion measure. Due to the energy compaction properties of image transforms, the dimensionality of the subspace distortion measure can be reduced drastically without significantly affecting the performance of the new quantizer. A modified LBG algorithm incorporating the new distortion measure is proposed. Unlike conventional transform-domain VQ, the codevector dimension is not reduced and a better image quality is guaranteed. The performance and design considerations of a real-time image encoder using the techniques are investigated. Compared with the spatial domain, a speed-up in both codebook design time and search time is obtained for mean-residual VQ, and the size of fast RAM is reduced by a factor of four. Degradation of image quality is less than 0.4 dB in PSNR.
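
A sketch of the transform-domain subspace distortion measure described above, assuming an orthonormal DCT-II and a hypothetical `keep` parameter for the number of retained coefficients; in a real encoder the codevectors' transform coefficients would be precomputed rather than recomputed per comparison.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def subspace_distortion(x, c, dct, keep):
    """Compare only the first `keep` DCT coefficients of the input and the
    codevector; energy compaction makes this a cheap proxy for the full
    spatial-domain squared error."""
    return np.sum((dct[:keep] @ x - dct[:keep] @ c) ** 2)

# toy usage: 16-dimensional (4x4 block) vectors, 4 of 16 coefficients kept
d = dct_matrix(16)
x, c = np.random.rand(16), np.random.rand(16)
print(subspace_distortion(x, c, d, keep=4), np.sum((x - c) ** 2))
```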

PatentDOI
TL;DR: An exemplary CELP coder where gain adaptation is performed using previous gain values in conjunction with an entry in a table comprising the logarithms of the root-mean-squared values of the codebook vectors, to predict the next gain value.
Abstract: An exemplary CELP coder where gain adaptation is performed using previous gain values in conjunction with an entry in a table comprising the logarithms of the root-mean-squared values of the codebook vectors, to predict the next gain value. Not only is this method less complex because the table entries are determined off-line, but in addition the use of a table at both the encoder and the decoder allows fixed-point/floating-point interoperability requirements to be met.

Patent
Juin-Hwey Chen
03 Sep 1992
TL;DR: In this article, a low-bitrate (typically 8 kbit/s or less), low-delay digital coder and decoder based on Code Excited Linear Prediction for speech and similar signals features backward adaptive adjustment for codebook gain and short-term synthesis filter parameters and forward adaptive adjustment of long-term (pitch) synthesis filter parameter.
Abstract: A low-bitrate (typically 8 kbit/s or less), low-delay digital coder and decoder based on Code Excited Linear Prediction for speech and similar signals features backward adaptive adjustment for codebook gain and short-term synthesis filter parameters and forward adaptive adjustment of long-term (pitch) synthesis filter parameters. A highly efficient, low delay pitch parameter derivation and quantization permits overall delay which is a fraction of prior coding delays for equivalent speech quality at low bitrates.

Proceedings ArticleDOI
23 Mar 1992
TL;DR: A new residual vector quantizer (RVQ) design algorithm is modified so that the multistage structure can be exploited to produce variable-rate RVQ (VR-RV Q) systems, which show significant improvement over fixed- rate RVQ systems with the same block size.
Abstract: A new residual vector quantizer (RVQ) design algorithm is modified so that the multistage structure can be exploited to produce variable-rate RVQ (VR-RVQ) systems. VR-RVQ systems are shown to have very useful properties: (1) the codebook storage requirement and the search complexity are both reduced; (2) the VR-RVQ system is able to exploit the spatial variance of perceptually important information; and (3) the VR-RVQ codebook can operate over a wide range of rates, without having to store several codebooks. Experiments were performed using VR-RVQ systems with vectors of many sizes, and results show significant improvement over fixed-rate RVQ systems with the same block size.

Proceedings ArticleDOI
06 Dec 1992
TL;DR: The real-time implementation of a wideband ACELP speech coder at 9.6 kb/s is presented and the quality of the encoded wideband speech was judged vastly superior to that of the original narrowband speech.
Abstract: The real-time implementation of a wideband ACELP speech coder at 9.6 kb/s is presented. The coder is implemented on a TMS320C30 floating-point DSP chip. The attempt to implement an ACELP coder for wideband speech in real time results in 3-4 times more complexity than that for narrowband speech. Very efficient algorithms for searching the pitch and codebook parameters have been introduced. The pitch search was brought down to 20% of real time by the combination of an efficient open-loop approach and a decimation procedure. The excitation search complexity was significantly reduced by using two codebooks. The first models the main features in the excitation and is very efficiently searched using focused search. The second has a simple structure and does not need exhaustive search. The quality of the encoded wideband speech at 9.6 kb/s was judged vastly superior to that of the original narrowband speech.

Journal ArticleDOI
TL;DR: A codebook design algorithm based on a two-dimensional discrete cosine transform (2-D DCT) is presented for vector quantization (VQ) of images that results in a considerable reduction in computation time and shows better picture quality.
Abstract: A codebook design algorithm based on a two-dimensional discrete cosine transform (2-D DCT) is presented for vector quantization (VQ) of images. The significant features of training images are extracted by using the 2-D DCT. A codebook is generated by partitioning the training set into a binary tree. Each training vector at a nonterminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. Compared with the pairwise nearest neighbor (PNN) algorithm, the algorithm results in a considerable reduction in computation time and shows better picture quality.
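
The single-feature threshold test at each nonterminal node can be sketched as a simple tree walk; the node layout (a nested dictionary) and the toy tree below are assumptions for illustration, not the paper's data structure.

```python
import numpy as np

def descend(tree, features):
    """Walk the binary codebook tree (sketch): at each nonterminal node a
    single 2-D DCT feature is compared with a threshold to pick the left
    or right child; leaves hold codevector indexes."""
    node = tree
    while 'leaf' not in node:
        f = features[node['feature']]
        node = node['left'] if f <= node['threshold'] else node['right']
    return node['leaf']

# toy usage: depth-2 tree over two DCT features
tree = {
    'feature': 0, 'threshold': 0.0,
    'left':  {'feature': 1, 'threshold': 0.0, 'left': {'leaf': 0}, 'right': {'leaf': 1}},
    'right': {'feature': 1, 'threshold': 0.0, 'left': {'leaf': 2}, 'right': {'leaf': 3}},
}
print(descend(tree, np.array([0.3, -0.8])))  # -> 2
```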

Journal ArticleDOI
TL;DR: An adaptive finite-state vector quantization (FSVQ) in which the bit rate and the encoding time can be reduced is described and a threshold is used in FSVQ to decide whether to switch to a full searching VQ.
Abstract: A coding algorithm must have the ability to adapt to changing image characteristics for image sequences. An adaptive finite-state vector quantization (FSVQ) in which the bit rate and the encoding time can be reduced is described. In order to improve the image quality and avoid producing a wrong state for an input vector, a threshold is used in FSVQ to decide whether to switch to a full searching VQ. The codebook is conditionally replenished according to a distortion threshold at a later time to reflect the local statistics of the current frame. After the codebook is replenished, one can quickly reconstruct the state codebooks of FSVQ using the state codebook selection algorithm. In the experiments, the improvement over the static SMVQ is up to 2.40 dB at nearly the same bit rate and the encoding time is only one-ninth the time required by the static SMVQ. Moreover, the improvement over the static VQ is up to 2.91 dB, and the encoding time is only three-fifths the time required by the static VQ for the image sequence 'Claire'.

Patent
06 Nov 1992
TL;DR: In this article, a vector quantization method employs mirrored input vectors to increase the reproduction quality of transmitted vector quantized data and an identification code identifying the selected orientation is also transmitted.
Abstract: A vector quantization method employs mirrored input vectors to increase the reproduction quality of transmitted vector quantized data. A codevector (131) is selected from a vector quantization codebook for each possible orientation of an input vector. The codevector having the smallest distortion (142) relative to the input vector is selected for transmission (148). An identification code identifying the selected orientation is also transmitted (148).
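
A sketch of the mirrored-orientation search described above, assuming four orientations (identity plus horizontal, vertical, and combined flips) and squared-error distortion; the orientation set and names are illustrative.

```python
import numpy as np

def encode_with_mirroring(block, codebook):
    """Mirror-augmented VQ encode (sketch): the 2-D input block is tried in
    several orientations; the codevector and orientation giving the smallest
    squared error are selected, and the orientation id is transmitted along
    with the codevector index."""
    orientations = [block, block[:, ::-1], block[::-1, :], block[::-1, ::-1]]
    best = None
    for oid, b in enumerate(orientations):
        v = b.reshape(-1)
        dists = ((codebook - v) ** 2).sum(axis=1)
        k = int(np.argmin(dists))
        if best is None or dists[k] < best[0]:
            best = (dists[k], k, oid)
    return best[1], best[2]          # (codevector index, orientation id)

# toy usage: 4x4 blocks, 64-entry codebook
cb = np.random.rand(64, 16)
idx, orient = encode_with_mirroring(np.random.rand(4, 4), cb)
print(idx, orient)
```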

Proceedings ArticleDOI
23 Mar 1992
TL;DR: The authors investigate three algorithms that orthogonalize codebooks in a multi-stage code excited linear prediction (CELP) speech coder and show that the extra computational cost of the recursive modified Gram-Schmidt algorithm is lower than that of the other two.
Abstract: The authors investigate three algorithms that orthogonalize codebooks in a multi-stage code excited linear prediction (CELP) speech coder. They carry out the same processing, a locally optimal modeling of the perceptual signal, but the computational costs differ. The authors show that the extra computational cost of the recursive modified Gram-Schmidt algorithm is lower than that of the other two. An orthogonal codebook is defined a priori and the authors observe an equivalence to orthogonal transform coding. Three methods based on the Karhunen-Loeve transform for designing this codebook are compared. A partitioned shape-gain VQ is applied in the transform domain.
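
Since the comparison above centers on the modified Gram-Schmidt recursion, here is the basic primitive in isolation; applying it to synthesis-filtered codevectors inside a CELP search, as the paper does, is not shown.

```python
import numpy as np

def modified_gram_schmidt(vectors):
    """Modified Gram-Schmidt orthonormalization (sketch): each vector is
    orthogonalized against the already processed ones by successive
    subtractions, which is numerically better behaved than the classical
    formulation."""
    q = np.array(vectors, dtype=float)
    for i in range(len(q)):
        q[i] /= np.linalg.norm(q[i])
        for j in range(i + 1, len(q)):
            q[j] -= (q[j] @ q[i]) * q[i]
    return q

# toy usage: orthogonalize three "codebook" vectors
v = np.random.randn(3, 8)
q = modified_gram_schmidt(v)
print(np.round(q @ q.T, 6))  # ~identity
```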

Patent
Miyano Toshiki
02 Dec 1992
TL;DR: In this article, an adaptive codebook storing excitation signal determined in advance and a plurality of excitation codebooks for multi-stage vector quantization are provided, and a method for speech coding and a voice-coder for coding speech signals divided into frames spaced with a constant interval.
Abstract: A method for speech coding and a voice coder for coding speech signals divided into frames spaced at a constant interval are disclosed. An adaptive codebook storing excitation signals determined in advance and a plurality of excitation codebooks for multi-stage vector quantization are provided. Each frame is divided into subframes. For each subframe, candidates numbering a first predetermined number of adaptive codevectors are selected, and then candidates numbering respective predetermined numbers of excitation codevectors are selected from each excitation codebook by using the adaptive codevector candidates. Finally, a combination of an adaptive codevector and one excitation codevector from each excitation codebook is selected from these candidates.

Proceedings ArticleDOI
23 Mar 1992
TL;DR: The authors describe the latest developments by the speech research group at CRIM in speaker-independent connected digit recognition, using hidden Markov models (HMMs) trained with maximum mutual information estimation (MMIE).
Abstract: The authors describe the latest developments by the speech research group at CRIM in speaker-independent connected digit recognition, using hidden Markov models (HMMs) trained with maximum mutual information estimation (MMIE). The work presented is a continuation of work previously described by the authors (see Proc. 1991 IEEE Int. Conf. on Acoust., Speech, and Signal Process., pp. 533-536). The main differences are: (1) use of the 20-kHz TI/NIST corpus available on CD-ROM (instead of the 10-kHz distribution tape), (2) use of word models (instead of sub-word units), (3) addition of second-derivative parameters, and (4) a more elaborate training procedure for codebook exponents. The experiments described were all performed on the complete adult portion of the corpus. The baseline system, using discrete HMMs and MMIE, has a 0.67% word error rate and a 2.03% string error rate. The authors describe techniques that allowed them to improve the recognition rate greatly.

Proceedings ArticleDOI
TL;DR: A feasibility study was conducted to investigate the advantages of data compression by spectral vector quantization for multispectral imagery data acquired from airborne scanners maintained and operated by NASA at the Stennis Space Center.
Abstract: Spectral vector quantization is presently applied to multispectral imagery from airborne scanners; the vector is here defined as an array of pixels from the same location from each channel. Attention is given to the effect of varying the size of the vector codebook on the quality of the compression and on subsequent classification. The rate of compression is programmable, but higher degradation is associated with higher compression ratios.

Proceedings ArticleDOI
23 Mar 1992
TL;DR: A supervised spectral mapping method for speaker adaptation based on a piecewise linear transformation of cepstral vectors is proposed, which improved recognition performance by 4% compared to the usual linear mapping method when using 100 training words, and also achieved a 3% higher rate on average than the D-map method.
Abstract: A supervised spectral mapping method for speaker adaptation based on a piecewise linear transformation of cepstral vectors is proposed. In this method, an input vector is mapped onto the target spectral space by a weighted sum of linearly transformed vectors using a set of mapping matrices which are associated with fuzzy partitioned spaces. These matrices were estimated so as to minimize the total mean square error between the mapped and target spectra. This method was compared with the difference interpolation mapping (D-map) method, which is an extension of the codebook mapping methods. Through 16 phoneme recognition tests using a single Gaussian distribution hidden Markov model (HMM), it was found that the proposed method with 16 fuzzy partitioned spaces improved recognition performance by 4% compared to the usual linear mapping method when using 100 training words, and also achieved a 3% higher rate on average than the D-map method.
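
A sketch of the weighted-sum mapping described above, assuming a fuzzy-c-means-style membership weight and purely linear (bias-free) mapping matrices; both assumptions are illustrative, and the minimum-mean-square-error estimation of the matrices is not shown.

```python
import numpy as np

def fuzzy_piecewise_map(x, centers, matrices, fuzziness=2.0):
    """Piecewise-linear spectral mapping (sketch): the input cepstral vector
    is transformed by every mapping matrix, and the results are blended with
    fuzzy-membership weights derived from the distances to the partition
    centers."""
    d = np.linalg.norm(centers - x, axis=1) + 1e-12
    w = d ** (-2.0 / (fuzziness - 1.0))
    w /= w.sum()                                   # fuzzy membership weights
    return sum(wi * (m @ x) for wi, m in zip(w, matrices))

# toy usage: 2 partitions, 12-dimensional cepstra
rng = np.random.default_rng(0)
C = rng.standard_normal((2, 12))
M = [np.eye(12) + 0.1 * rng.standard_normal((12, 12)) for _ in range(2)]
print(fuzzy_piecewise_map(rng.standard_normal(12), C, M).shape)
```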