
Code-excited linear prediction

About: Code-excited linear prediction is a research topic. Over its lifetime, 2,025 publications have been published on this topic, receiving 28,633 citations. The topic is also known as: CELP.


Papers
Proceedings ArticleDOI
01 Sep 2015
TL;DR: The serial problem of watermarking and then encoding with the CELP codec was reduced by the proposed method, which also increased the bit detection rate.
Abstract: This paper proposes the unification of the code-excited linear prediction (CELP) codec process with watermarking based on formant tuning. The serial problem of watermarking and then encoding with the CELP codec was thereby reduced by the proposed method, which also increased the bit detection rate. We took advantage of two key properties: (I) humans do not perceive alterations applied to formants, and (II) both CELP and watermarking based on formant tuning utilize linear prediction coefficients. We investigated the inaudibility and robustness of the proposed method by carrying out three different experiments using log-spectrum distance (LSD), the perceptual evaluation of speech quality (PESQ), and the bit detection rate (BDR). The results indicated that the proposed method satisfied the inaudibility requirement when watermarking was applied to the CELP codec, which increased the watermarking detection rate.
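The inaudibility evaluation above relies on the log-spectrum distance (LSD) between original and watermarked speech. The sketch below, which is not taken from the paper, shows one conventional way to compute LSD from LPC coefficients; the LPC order and the toy coefficient vectors are illustrative assumptions.

```python
# Illustrative-only LSD computation between the LPC envelopes of an original
# and a "formant-tuned" frame. The 10th-order coefficients below are toy
# values, not data from the paper.
import numpy as np
from scipy.signal import freqz

def lpc_envelope_db(a, n_fft=512):
    """Magnitude (dB) of the all-pole LPC envelope 1/A(z)."""
    _, h = freqz([1.0], a, worN=n_fft)
    return 20.0 * np.log10(np.abs(h) + 1e-12)

def log_spectral_distance(a_orig, a_marked, n_fft=512):
    """RMS difference in dB between two LPC spectral envelopes."""
    d = lpc_envelope_db(a_orig, n_fft) - lpc_envelope_db(a_marked, n_fft)
    return float(np.sqrt(np.mean(d ** 2)))

a_orig = np.array([1.0, -1.2, 0.5, 0.1, -0.05, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0])
a_marked = a_orig + np.concatenate(([0.0], 0.01 * np.ones(10)))  # slightly perturbed envelope
print(f"LSD = {log_spectral_distance(a_orig, a_marked):.3f} dB")
```

Because formant tuning perturbs the linear prediction coefficients directly, comparing the two all-pole envelopes gives a direct measure of how far the watermark moves the spectral envelope.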

1 citation

Proceedings Article
01 Jan 2003
TL;DR: An efficient rate selection algorithm is proposed that can transcode speech encoded by any code-excited linear prediction (CELP)-type codec into a format compatible with the selectable mode vocoder (SMV) via direct parameter transformation.
Abstract: In this paper, we propose an efficient rate selection algorithm that can be used to transcode speech encoded by any code-excited linear prediction (CELP)-type codec into a format compatible with the selectable mode vocoder (SMV) via direct parameter transformation. The proposed algorithm performs rate selection using the CELP parameters. Simulation results show that, while maintaining an overall bit-rate similar to that of SMV's own rate selection algorithm, the proposed algorithm requires less computation than SMV and does not degrade the quality of the transcoded speech.
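To make the idea of selecting a rate directly from CELP-domain parameters concrete, here is a hedged toy sketch, not the paper's algorithm: a frame's pitch gain and fixed-codebook gain are mapped to one of SMV's four rates. Both the feature choice and the thresholds are assumptions made only for illustration.

```python
# Toy decision tree mapping per-frame CELP parameters to one of SMV's four
# rates. The features and thresholds are assumptions for illustration only;
# they are not the rate selection rules of the paper or of SMV itself.
def select_smv_rate(pitch_gain: float, fixed_gain: float) -> str:
    """Return an SMV rate label for one frame, based only on CELP-domain gains."""
    if fixed_gain < 0.05 and pitch_gain < 0.2:
        return "Rate 1/8"   # near-silence or background noise
    if pitch_gain > 0.7:
        return "Rate 1/2"   # strongly voiced: the adaptive codebook carries most of the signal
    if fixed_gain > 0.5:
        return "Rate 1"     # transient or onset: spend full rate
    return "Rate 1/4"       # unvoiced or stationary segments

print(select_smv_rate(pitch_gain=0.85, fixed_gain=0.3))  # -> Rate 1/2
```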

1 citation

Proceedings ArticleDOI
01 Jun 1990
TL;DR: In designing the codebook, the LBG clustering method failed to converge, but a deterministic codebook was found from a training set using the method of successive clustering.
Abstract: This paper presents a two-dimensional code-excited linear prediction (CELP) method for image coding. This method is a two-dimensional extension of the CELP systems commonly used for speech coding. The decoder is identical to a conventional DPCM decoder. However, at the encoder, the input images are first decomposed into disjoint blocks. A single codeword from a table of N codewords is used to represent the vector of quantized residuals for each block. The encoder selects the appropriate codeword by reconstructing N versions of the current block, using each of the N vectors of the codebook. The index of the codeword giving the least distortion is then transmitted. In designing the codebook, the LBG clustering method failed to converge, but we succeeded in finding a deterministic codebook based on a training set using the method of successive clustering. The system has been extended by using adaptive prediction, where one of K possible prediction filters is used for each block; the encoder chooses the prediction filter that results in the least mean-squared prediction error. An index is transmitted to the decoder indicating which prediction filter has been used. With no additional overhead, K different codebooks can be used, corresponding to each of the prediction filters. We have tested this system using five predictors. The five predictors were initially selected to give good performance on different types of image material, e.g. edges of different orientation, and then refined by minimizing the mean-square prediction error on those pixels for which the initial predictor gave the lowest mean-square error.
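The encoder-side search described above (reconstruct the block with every codeword and keep the index giving the least distortion) can be summarised in a few lines. The sketch below is a simplified illustration: the flat-mean prediction and the random codebook are placeholders, not the paper's deterministic codebook or its five refined predictors.

```python
# Simplified encoder-side codebook search: reconstruct the block with every
# residual codeword through a decoder-identical step and keep the index with
# the least squared error. The prediction and codebook here are placeholders.
import numpy as np

def reconstruct(block_pred, residual_codeword):
    """Decoder-identical reconstruction: prediction plus coded residual."""
    return block_pred + residual_codeword

def encode_block(block, block_pred, codebook):
    """Index of the codeword whose reconstruction is closest to the input block."""
    best_idx, best_err = 0, np.inf
    for idx, cw in enumerate(codebook):
        err = np.sum((block - reconstruct(block_pred, cw)) ** 2)
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx

rng = np.random.default_rng(0)
codebook = rng.normal(scale=4.0, size=(64, 8, 8))    # N = 64 codewords of 8x8 residuals
block = rng.integers(0, 256, size=(8, 8)).astype(float)
block_pred = np.full((8, 8), block.mean())           # crude stand-in for the DPCM prediction
print("selected codeword index:", encode_block(block, block_pred, codebook))
```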

1 citation

Journal ArticleDOI
C. H. Kwon, Chong Kwan Un
TL;DR: A new adaptive source is proposed in which samples of the source have different gains according to their amplitudes by a two-tap pitch predictor. Results show that peaky pulses at voiced onsets and bursts of plosive sound are clearly reconstructed, and that in voiced sound the excitation has the desired peaky-pulse characteristic and the pitch periodicity is well reproduced.
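For readers unfamiliar with the term, a two-tap pitch predictor subtracts a weighted combination of two samples roughly one pitch period in the past. The sketch below is a generic illustration of that long-term prediction step, not the paper's adaptive-gain source; the lag and tap weights are arbitrary assumptions.

```python
# Generic two-tap long-term (pitch) prediction on a toy periodic signal; the
# lag and tap weights are arbitrary, and the paper's adaptive gain logic is
# not modelled here.
import numpy as np

def two_tap_pitch_residual(x, lag, b0, b1):
    """Residual after subtracting a two-tap prediction one pitch period back."""
    e = np.copy(x)
    for n in range(lag + 1, len(x)):
        e[n] = x[n] - b0 * x[n - lag] - b1 * x[n - lag - 1]
    return e

fs, f0 = 8000, 100                        # 8 kHz "speech", 100 Hz pitch -> lag of 80 samples
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * f0 * t)            # toy voiced segment
e = two_tap_pitch_residual(x, lag=fs // f0, b0=0.6, b1=0.4)
print(f"prediction gain: {np.sum(x**2) / np.sum(e**2):.1f}x")
```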

1 citation

Proceedings ArticleDOI
16 Oct 2006
TL;DR: The gain filter is updated by three different methods, the weighted L-S recursive filter, the finite-memory recursive filter, and the BP neural network, and a scheme is proposed to estimate SNR so that the gain predictor can be optimized separately from the quantizer.
Abstract: Recommendation G.728 relies on the Levinson-Durbin algorithm to update the gain filter coefficients. In this paper, the update is instead performed by three different methods: the weighted L-S recursive filter, the finite-memory recursive filter, and the BP neural network. Because the quantizer does not yet exist when the gain filter is optimized, the quantization SNR cannot be used to evaluate its performance. This paper proposes a scheme to estimate the SNR so that the gain predictor can be optimized separately from the quantizer. With all three gain filters, the speech coding results are better than those of G.728; the weighted L-S algorithm performs best, with an average segmental SNR about 0.76 dB higher than G.728. The methods were also evaluated with excitation vectors of 16 and 20 samples, and the weighted L-S algorithm again gave the best result.
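The comparison above is reported in terms of average segmental SNR, i.e. SNR computed over short segments and then averaged so that quiet passages are weighted fairly. The sketch below computes that figure in the usual way; the segment length and the clamping range are conventional choices, not values taken from the paper.

```python
# Conventional segmental SNR: per-segment SNR in dB, clamped, then averaged.
# Segment length (10 ms at 8 kHz) and clamp limits are common defaults, not
# the paper's settings.
import numpy as np

def segmental_snr(clean, coded, seg_len=80, lo=-10.0, hi=35.0):
    """Average per-segment SNR in dB; each segment's SNR is clamped to [lo, hi]."""
    n_seg = len(clean) // seg_len
    snrs = []
    for i in range(n_seg):
        s = clean[i * seg_len:(i + 1) * seg_len]
        e = s - coded[i * seg_len:(i + 1) * seg_len]
        snr = 10.0 * np.log10((np.sum(s ** 2) + 1e-12) / (np.sum(e ** 2) + 1e-12))
        snrs.append(float(np.clip(snr, lo, hi)))
    return float(np.mean(snrs))

# Toy usage: 0.5 s of noise at 8 kHz standing in for clean and decoded speech.
rng = np.random.default_rng(1)
clean = rng.normal(size=4000)
coded = clean + 0.05 * rng.normal(size=4000)   # stand-in for the decoder output
print(f"segmental SNR = {segmental_snr(clean, coded):.2f} dB")
```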

1 citation


Network Information
Related Topics (5)
Decoding methods: 65.7K papers, 900K citations, 83% related
Data compression: 43.6K papers, 756.5K citations, 83% related
Signal processing: 73.4K papers, 983.5K citations, 83% related
Feature vector: 48.8K papers, 954.4K citations, 80% related
Feature extraction: 111.8K papers, 2.1M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    6
2021    3
2020    7
2019    15
2018    10
2017    13