
Code-excited linear prediction

About: Code-excited linear prediction is a research topic. Over its lifetime, 2,025 publications have been published on this topic, receiving 28,633 citations. The topic is also known as CELP.


Papers
Journal ArticleDOI
TL;DR: Results show a saving of at least 2 b/frame for unvoiced spectra relative to voiced spectra at the same spectral distortion, and tests on speech synthesized from the true residual lead to some interesting observations on the role of the analysis-by-synthesis structure of CELP.
Abstract: Phonetic classification of speech frames allows distinctive quantization and bit allocation schemes suited to the particular class. Separate quantization of the linear predictive coding (LPC) parameters for voiced and unvoiced speech frames is shown to offer useful gains for representing the synthesis filter commonly used in code-excited linear prediction (CELP) and other coders. Subjective test results are reported that determine the required bit rate and accuracy in the two classes of voiced and unvoiced LPC spectra for CELP coding with phonetic classification. It was found, in this context, that unvoiced spectra need 9 b/frame or more whereas voiced spectra need 25 b/frame or more with the quantization schemes used. New spectral distortion criteria needed to assure transparent LPC spectral quantization for each voicing class in CELP coders are presented. Similar subjective test results for speech synthesized from the true residual signal are also presented, leading to some interesting observations on the role of the analysis-by-synthesis structure of CELP. Objective performance assessments based on the spectral distortion measure are also presented. The theoretical distortion-rate function for the spectral distortion measure is estimated for voiced and unvoiced LPC parameters and compared with experimental results obtained with unstructured vector quantization (VQ). These results show a saving of at least 2 b/frame for unvoiced spectra compared to voiced spectra to achieve the same spectral distortion performance.
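
As a rough illustration of the objective measure discussed above, the following is a minimal Python sketch (function names hypothetical) of the log spectral distortion between a reference and a quantized LPC filter, assuming the usual definition as the RMS difference, over frequency, of the two all-pole log power spectra:

```python
import numpy as np

def lpc_log_spectrum_db(a, n_fft=512):
    """Log power spectrum (dB) of the all-pole LPC filter 1/A(z).
    `a` holds the prediction coefficients [1, a1, ..., ap]."""
    A = np.fft.rfft(a, n_fft)              # A(e^jw) on a dense frequency grid
    return -20.0 * np.log10(np.abs(A) + 1e-12)

def spectral_distortion_db(a_ref, a_quant, n_fft=512):
    """RMS log spectral distortion (dB) between reference and quantized LPC filters."""
    diff = lpc_log_spectrum_db(a_ref, n_fft) - lpc_log_spectrum_db(a_quant, n_fft)
    return np.sqrt(np.mean(diff ** 2))

# Hypothetical example: a 10th-order LPC filter and a slightly perturbed
# ("quantized") version of it.
a = np.array([1.0, -1.6, 0.9, -0.2, 0.1, 0.05, -0.03, 0.02, -0.01, 0.005, -0.002])
a_hat = a + np.random.default_rng(0).normal(scale=1e-3, size=a.shape)
print(f"SD = {spectral_distortion_db(a, a_hat):.3f} dB")
```

Transparent LPC quantization is commonly associated with an average SD near 1 dB with few large outliers, which is the kind of benchmark that the per-voicing-class criteria in the abstract refer to.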

25 citations

Journal ArticleDOI
TL;DR: The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method the authors developed previously.
Abstract: This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256×256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
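
The analysis-by-synthesis excitation search described above can be sketched as follows. This is a simplified one-dimensional stand-in (the paper operates on 3-D microblocks of a multispectral volume), and the codebook size, AR order, and function names are illustrative assumptions:

```python
import numpy as np
from scipy.signal import lfilter

def best_excitation(target, codebook, ar_coeffs):
    """Analysis-by-synthesis search: pass each candidate excitation through the
    AR synthesis filter 1/A(z) and keep the gain-scaled codeword with the
    smallest squared error against the target block."""
    best = (None, 0.0, np.inf)                    # (index, gain, error)
    for i, c in enumerate(codebook):
        synth = lfilter([1.0], ar_coeffs, c)      # synthesized block for this codeword
        energy = synth @ synth
        if energy == 0.0:
            continue
        gain = (target @ synth) / energy          # least-squares optimal gain
        err = np.sum((target - gain * synth) ** 2)
        if err < best[2]:
            best = (i, gain, err)
    return best

# Hypothetical 1-D stand-in for one microblock: a random codebook, a 4th-order
# AR model, and a target built from one codeword so the search can recover it.
rng = np.random.default_rng(1)
codebook = rng.normal(size=(64, 40))
ar = np.array([1.0, -0.8, 0.3, -0.1, 0.05])
target = 0.7 * lfilter([1.0], ar, codebook[17])
idx, gain, err = best_excitation(target, codebook, ar)
print(idx, round(gain, 2), err)                   # expected: 17, 0.7, ~0
```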

25 citations

Patent
29 Dec 1997
TL;DR: This patent describes a multimodal code-excited linear prediction (CELP) speech coder that determines a pitch-lag-periodicity-independent peakiness measure from the input speech.
Abstract: A multimodal code-excited linear prediction (CELP) speech coder determines a pitch-lag-periodicity-independent peakiness measure from the input speech. If the measure is greater than a peakiness threshold, the encoder classifies the speech in a first coding mode. In one embodiment, only frames having an open-loop pitch prediction gain not greater than a threshold, a zero-crossing rate not less than a threshold, and a peakiness measure not greater than the peakiness threshold will be classified as unvoiced speech. Accordingly, the beginning or end of a voiced utterance will be properly coded as voiced speech and speech quality improved. In another embodiment, gain-match scaling matches coded-speech energy to input-speech energy. A target vector (the portion of the input speech with any effects of previous signals removed) is approximated using the precomputed gain for excitation vectors while minimizing perceptually weighted error. The correct gain value is perceptually more important than the shape of the excitation vector for most unvoiced signals.
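
A minimal sketch of the kind of frame tests the abstract describes is given below. The peakiness definition (RMS divided by mean absolute amplitude), the threshold values, and the function names are assumptions for illustration rather than the patent's actual formulas or figures:

```python
import numpy as np

def peakiness(frame):
    """Pitch-lag-independent peakiness: RMS over mean absolute amplitude.
    Frames with isolated large pulses score high; noise-like frames score near 1."""
    rms = np.sqrt(np.mean(frame ** 2))
    return rms / (np.mean(np.abs(frame)) + 1e-12)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:]))

def is_unvoiced(frame, pitch_gain, gain_thr=0.4, zcr_thr=0.25, peak_thr=1.3):
    """Classify a frame as unvoiced only when all three tests pass
    (illustrative thresholds, not the patent's values)."""
    return (pitch_gain <= gain_thr
            and zero_crossing_rate(frame) >= zcr_thr
            and peakiness(frame) <= peak_thr)
```

With the three tests combined, a frame containing a sharp voiced onset scores high on peakiness and therefore fails the unvoiced test even if its pitch prediction gain is still low, which is how the beginning or end of a voiced utterance ends up coded as voiced speech.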

25 citations

Journal Article
TL;DR: In this paper, the authors improved the noise shaping of CELP using a more modern psychoacoustic model, which has the significant advantage of improving the quality of an existing codec without the need to change the bit-stream.
Abstract: One key aspect of the CELP algorithm is that it shapes the coding noise using a simple, yet effective, weighting filter. In this paper, we improve the noise shaping of CELP using a more modern psychoacoustic model. This has the significant advantage of improving the quality of an existing codec without the need to change the bit-stream. More specifically, we improve the Speex CELP codec by using the psychoacoustic model used in the Vorbis audio codec. The results show a significant increase in quality, especially at high bit-rates, where the improvement is equivalent to a 20% reduction in bit-rate. The technique itself is not specific to Speex and could be applied to other CELP codecs.
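
For context, the "simple, yet effective, weighting filter" of conventional CELP that the paper improves upon is typically the bandwidth-expanded filter W(z) = A(z/γ1)/A(z/γ2), which concentrates coding noise under the formants where it is better masked. The sketch below uses commonly quoted γ values; they are an assumption here, not figures from the paper:

```python
import numpy as np
from scipy.signal import lfilter

def bandwidth_expand(a, gamma):
    """Scale the LPC coefficients a_k by gamma**k, i.e. form A(z/gamma)."""
    return a * gamma ** np.arange(len(a))

def perceptual_weighting(signal, a, gamma1=0.9, gamma2=0.6):
    """Conventional CELP noise-shaping filter W(z) = A(z/g1)/A(z/g2):
    errors near spectral (formant) peaks are de-emphasized in the search."""
    num = bandwidth_expand(a, gamma1)
    den = bandwidth_expand(a, gamma2)   # den[0] stays 1, so the filter is valid
    return lfilter(num, den, signal)
```

During the codebook search, the error between the input and each candidate synthesis is passed through this filter before being minimized; the paper's contribution is to replace this fixed shaping with a psychoacoustic model without changing the bit-stream.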

25 citations

PatentDOI
TL;DR: There is provided a code excitation linear predictive coding or decoding apparatus in which a code vector, which is transmitted by a codebook such as a stochastic codebook, is converted adaptively in accordance with vocal tract analysis information (LPC) so that a high quality reproduction speech is obtained at a low coding rate.
Abstract: There is provided a code excitation linear predictive (CELP) coding or decoding apparatus in which a code vector, which is transmitted by a codebook such as a stochastic codebook, is converted adaptively in accordance with vocal tract analysis information (LPC) so that a high quality reproduction speech is obtained at a low coding rate. Further, in order to obtain a similar effect, a pulse-like excitation codebook formed of an isolated impulse is provided in addition to the adaptive excitation codebook and stochastic excitation codebook so that either the stochastic excitation codebook or the pulse-like excitation codebook is selectively used to provide a vocal tract parameter as a linear spectrum pair parameter.
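
The selective use of a pulse-like codebook described above can be sketched as follows; the single-impulse construction, the codebook sizes, and the plain least-squares error (rather than a perceptually weighted one) are simplifying assumptions for illustration:

```python
import numpy as np
from scipy.signal import lfilter

def impulse_codebook(size):
    """Pulse-like excitation codebook: each codeword is one isolated unit impulse."""
    return np.eye(size)

def codeword_error(target, codeword, ar):
    """Squared error of the gain-scaled, synthesized codeword against the target."""
    synth = lfilter([1.0], ar, codeword)
    gain = (target @ synth) / (synth @ synth + 1e-12)
    return np.sum((target - gain * synth) ** 2)

def select_codebook(target, stochastic_cb, pulse_cb, ar):
    """Use whichever codebook (stochastic or pulse-like) contains the codeword
    with the smaller synthesis error for this subframe."""
    err_s = min(codeword_error(target, c, ar) for c in stochastic_cb)
    err_p = min(codeword_error(target, c, ar) for c in pulse_cb)
    return "stochastic" if err_s <= err_p else "pulse-like"
```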

25 citations


Network Information
Related Topics (5)
Decoding methods: 65.7K papers, 900K citations (83% related)
Data compression: 43.6K papers, 756.5K citations (83% related)
Signal processing: 73.4K papers, 983.5K citations (83% related)
Feature vector: 48.8K papers, 954.4K citations (80% related)
Feature extraction: 111.8K papers, 2.1M citations (79% related)
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2022    6
2021    3
2020    7
2019    15
2018    10
2017    13