
Code-excited linear prediction

About: Code-excited linear prediction is a research topic. Over the lifetime, 2025 publications have been published within this topic receiving 28633 citations. The topic is also known as: CELP.


Papers
Book Chapter DOI
01 Jan 1993
TL;DR: Most linear prediction speech codecs (coder-decoders) employ a fixed frame duration for linear prediction computation, which is a compromise between the rate of spectrum variation of the speech signal and the transmission requirements of the LPC information.
Abstract: Most linear prediction speech codecs (coder-decoders) employ a fixed frame duration for linear prediction computation, which is a compromise between the rate of spectrum variation of the speech signal and the transmission requirements of the LPC information. The LPC residual signal is coded by considering subframes significantly shorter than the LPC frame. Such subframe-based excitation computations are motivated by considerations of computational complexity. Global search through the space of excitations for the entire frame does not increase coding delay, but requires computational resources beyond those available on one or two chips today.
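The frame/subframe split described above can be sketched as a toy example; the frame and subframe lengths, LPC order, and helper functions here are generic textbook choices, not taken from the chapter:

```python
# Toy sketch: LPC coefficients computed once per fixed frame, with the
# prediction residual (excitation) handled in shorter subframes.
# FRAME, SUBFRAME, and ORDER are illustrative values (20 ms / 5 ms at 8 kHz).
import numpy as np

FRAME = 160      # LPC analysis frame length in samples
SUBFRAME = 40    # excitation subframe length in samples
ORDER = 10       # LPC order

def lpc_coeffs(x, order=ORDER):
    """Levinson-Durbin recursion on the frame's autocorrelation."""
    n = len(x)
    r = np.array([x[:n - m] @ x[m:] for m in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err
        a[1:i] = a[1:i] + k * a[1:i][::-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a

def residual(x, a):
    """Inverse-filter the frame with A(z) to obtain the LPC residual."""
    return np.convolve(x, a)[:len(x)]

rng = np.random.default_rng(0)
speech = rng.standard_normal(FRAME)        # stand-in for a speech frame
a = lpc_coeffs(speech)
res = residual(speech, a)
subframes = res.reshape(-1, SUBFRAME)      # excitation is coded per subframe
```

Coding the residual subframe-by-subframe keeps each excitation search over a much smaller space than one global search spanning the whole frame, which is the complexity motivation the chapter describes.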

3 citations

Patent
Dmitry V. Shmunk, Dmitry Rusanov
07 Sep 2012
TL;DR: A method for achieving bitstream scalability in a multi-channel audio encoder is presented: audio input data is received and organized by a Code Excited Linear Predictor (CELP) processing module for further encoding, arranged according to significance so that more significant data is placed ahead of less significant data, yielding a scalable output bitstream.
Abstract: The present invention provides methods and apparatuses for processing audio data. In one embodiment, there is provided a method for achieving bitstream scalability in a multi-channel audio encoder, said method comprising: receiving audio input data; organizing said input data by a Code Excited Linear Predictor (CELP) processing module for further encoding by arranging said data according to significance of data, where more significant data is placed ahead of less significant data; and providing a scalable output bitstream. The organized CELP data comprises a first part and a second part. The first part comprises a frame header, subframe parameters and innovation vector quantization data from the first frame from all channels. The innovation vector quantization data from the first frames from all channels is arranged according to channel number.
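The significance-ordered packing idea can be illustrated with a toy sketch; all field names, ranks, and payload bytes below are hypothetical and not taken from the patent:

```python
# Toy sketch of significance-ordered packing for a scalable bitstream:
# fields are emitted most-significant-first, so the stream can simply be
# truncated to reach a lower bit rate. Field names/payloads are made up.
fields = {
    "frame_header":      (0, b"HDR"),       # rank 0 = most significant
    "subframe_params":   (1, b"SFP1SFP2"),
    "innovation_vq_ch1": (2, b"IVQ1"),
    "innovation_vq_ch2": (3, b"IVQ2"),
    "enhancement":       (4, b"ENH"),       # least significant, dropped first
}

def pack(fields):
    """Concatenate payloads in order of increasing significance rank."""
    ordered = sorted(fields.values(), key=lambda t: t[0])
    return b"".join(payload for _, payload in ordered)

stream = pack(fields)
scaled = stream[:11]   # truncation discards only the least significant data
```

Because more significant data sits at the front, a decoder given only a prefix of the stream still recovers the header and subframe parameters before any innovation or enhancement data is lost.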

3 citations

Proceedings Article DOI
13 May 2002
TL;DR: A Karhunen-Loève Transform (KLT)-based classified VQ (CVQ) is proposed, in which the space-filling advantage can be utilized since the Voronoi-region shape is not affected by the KLT; it has a computational complexity similar to DVQ and much lower than CELP.
Abstract: If the signal statistics are given, direct vector quantization (DVQ) according to these statistics provides the highest coding efficiency, but requires unmanageable storage. In code-excited linear predictive (CELP) coding, a single “compromise” codebook is trained in the prediction-residual domain, and the space-filling and shape advantages of vector quantization (VQ) are utilized in a non-optimal, average sense. In this paper, we propose a Karhunen-Loève Transform (KLT)-based classified VQ (CVQ), where the space-filling advantage can be utilized since the Voronoi-region shape is not affected by the KLT. The memory and shape advantages can also be used, since each codebook is designed based on a narrow class of KLT-domain statistics. Our experiments show that the KLT-CVQ provides a higher SNR than CELP and (single-codebook) DVQ, and has a computational complexity similar to DVQ and much lower than CELP. Storage requirements are modest because of the energy concentration property of the KLT.
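A minimal sketch of the classified-KLT idea, assuming per-class covariances are known and using untrained toy codebooks; all names, dimensions, and the fixed class choice are illustrative, not the paper's design:

```python
# Toy classified VQ in the KLT domain: one KLT basis and one codebook per
# statistical class. Codebooks here are random stand-ins; in practice each
# would be trained on its class's KLT-domain statistics.
import numpy as np

def klt(cov):
    """KLT basis = eigenvectors of the class covariance matrix."""
    _, vecs = np.linalg.eigh(cov)
    return vecs[:, ::-1]   # columns ordered by decreasing eigenvalue

def quantize(x, codebook):
    """Nearest-neighbour search; returns the chosen code-vector."""
    idx = np.argmin(((codebook - x) ** 2).sum(axis=1))
    return codebook[idx]

rng = np.random.default_rng(1)
# Two toy classes with different correlation structure.
covs = [np.array([[1.0, 0.9], [0.9, 1.0]]),
        np.array([[1.0, -0.9], [-0.9, 1.0]])]
bases = [klt(c) for c in covs]
books = [rng.standard_normal((8, 2)) for _ in covs]  # per-class codebooks

x = np.array([0.7, 0.65])
cls = 0                                   # classifier output (assumed given)
y = bases[cls].T @ x                      # forward KLT
x_hat = bases[cls] @ quantize(y, books[cls])  # quantize, then inverse KLT
```

Because the KLT is orthogonal, Euclidean distances (and hence Voronoi-region shapes) are preserved in the transform domain, which is the space-filling property the abstract relies on.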

3 citations

Journal Article DOI
TL;DR: This correspondence presents a new two-stage adaptive vector quantizer of LSF parameters in LPC speech coding that offers transparent quantization with 22 b/frame.
Abstract: This correspondence presents a new two-stage adaptive vector quantizer of LSF parameters in LPC speech coding. The first codebook is adapted by a partition-delete operation, whereas the code-vectors of the second codebook remain unchanged. The objective and subjective evaluations show that the proposed scheme offers transparent quantization with 22 b/frame.
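A minimal two-stage VQ sketch with toy random codebooks; the paper's partition-delete adaptation of the first codebook is not reproduced here, and all sizes are illustrative:

```python
# Toy two-stage vector quantizer: stage 1 gives a coarse match, stage 2
# quantizes the stage-1 residual. Codebooks are random stand-ins for
# trained LSF codebooks.
import numpy as np

rng = np.random.default_rng(2)
DIM = 10                                        # e.g. 10 LSF parameters
stage1 = rng.standard_normal((16, DIM))         # coarse codebook (4 bits)
stage2 = 0.1 * rng.standard_normal((16, DIM))   # residual codebook (4 bits)

def nearest(x, book):
    """Index of the nearest code-vector in Euclidean distance."""
    return int(np.argmin(((book - x) ** 2).sum(axis=1)))

def encode(x):
    i = nearest(x, stage1)                  # stage 1: coarse match
    j = nearest(x - stage1[i], stage2)      # stage 2: quantize the residual
    return i, j

def decode(i, j):
    return stage1[i] + stage2[j]

x = rng.standard_normal(DIM)
i, j = encode(x)
x_hat = decode(i, j)
```

Splitting the quantizer into two small codebooks keeps storage and search cost far below a single codebook of equivalent rate (here, two 16-entry searches instead of one 256-entry search).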

3 citations

Proceedings Article DOI
06 Oct 2002
TL;DR: An optimization algorithm for the PSVQ framework is presented, and new memory quantization methods are derived from the framework, and the new methods are applied to spectrum quantization, and are shown to outperform previous methods.
Abstract: Memory quantization is studied in detail. We propose a framework, power series vector quantization (PSVQ), for analysis and development of memory quantizers. Furthermore, we present an optimization algorithm for the framework, and new memory quantization methods are derived from the framework. The new methods are applied to spectrum quantization, and are shown to outperform previous methods.
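The PSVQ framework itself is not specified in this abstract; as a generic illustration of memory quantization, a first-order predictive VQ can be sketched as follows (all parameters are illustrative, and this is not the paper's method, whose power-series state model is more general):

```python
# Generic first-order predictive (memory) VQ: each frame is predicted from
# the previously decoded frame, and only the prediction error is quantized.
# Codebook and prediction coefficient are toy choices.
import numpy as np

rng = np.random.default_rng(3)
CODEBOOK = rng.standard_normal((32, 4))   # residual codebook (toy)
ALPHA = 0.8                               # prediction coefficient (assumed)

def encode_seq(frames):
    """Quantize a sequence of frames, carrying decoder state as memory."""
    pred = np.zeros(frames.shape[1])
    indices, decoded = [], []
    for x in frames:
        e = x - ALPHA * pred                             # prediction error
        i = int(np.argmin(((CODEBOOK - e) ** 2).sum(axis=1)))
        x_hat = ALPHA * pred + CODEBOOK[i]               # decoder-side output
        pred = x_hat                                     # update memory
        indices.append(i)
        decoded.append(x_hat)
    return indices, np.array(decoded)

frames = rng.standard_normal((5, 4))
idx, rec = encode_seq(frames)
```

The encoder tracks the decoder's reconstruction (not the original signal) as its memory, so encoder and decoder states stay synchronized in the absence of channel errors.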

3 citations


Network Information
Related Topics (5)
Decoding methods: 65.7K papers, 900K citations (83% related)
Data compression: 43.6K papers, 756.5K citations (83% related)
Signal processing: 73.4K papers, 983.5K citations (83% related)
Feature vector: 48.8K papers, 954.4K citations (80% related)
Feature extraction: 111.8K papers, 2.1M citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2022  6
2021  3
2020  7
2019  15
2018  10
2017  13