
Cepstrum

About: Cepstrum is a research topic. Over its lifetime, 3346 publications have been published within this topic, receiving 55742 citations.


Papers
Journal ArticleDOI
TL;DR: The spectral estimation problem of a stationary autoregressive moving average (ARMA) process is considered, and a new method for the estimation of the MA part is proposed that requires neither initial estimates nor the fitting of a large-order AR model.
Abstract: In this letter, the spectral estimation problem of a stationary autoregressive moving average (ARMA) process is considered, and a new method for the estimation of the MA part is proposed. A simple recursion relating the ARMA parameters and the cepstral coefficients of an ARMA process is derived and utilized for the estimation of the MA parameters. The method requires neither initial estimates nor the fitting of a large-order AR model, both of which would require additional a priori knowledge of the signal and increase the computational complexity. Simulation results illustrating the performance of the new method are also given.
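The recursion itself is given in the letter; as a hedged illustration of the quantities it relates, the sketch below simply computes the cepstral coefficients of a known ARMA model numerically, as the inverse FFT of its log power spectrum. The filter coefficients and FFT length are assumptions for the example, not values from the paper.

```python
# Illustrative sketch only (not the letter's recursion): cepstral coefficients
# of an assumed ARMA(2,1) model, obtained as the inverse FFT of the model's
# log power spectrum.
import numpy as np
from scipy.signal import freqz

ar = np.array([1.0, -0.8, 0.15])   # A(z) coefficients (assumed example)
ma = np.array([1.0, 0.5])          # B(z) coefficients (assumed example)

n_fft = 1024
_, h = freqz(ma, ar, worN=n_fft, whole=True)    # frequency response on the full unit circle
log_spectrum = np.log(np.abs(h) ** 2)           # log power spectrum
cepstrum = np.real(np.fft.ifft(log_spectrum))   # real (power) cepstrum

print(cepstrum[:8])   # low-order cepstral coefficients c[0]..c[7]
```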

29 citations

Proceedings Article
01 Jan 2002
TL;DR: This work focuses on experimental results obtained by combining two noise-compensation methods that are shown to be complementary: the classical spectral subtraction algorithm and histogram equalization.
Abstract: This work is mainly focused on showing experimental results using a combination of two noise-compensation methods which are shown to be complementary: the classical spectral subtraction algorithm and histogram equalization. While spectral subtraction is focused on the reduction of additive noise in the spectral domain, histogram equalization is applied in the cepstral domain to compensate for the remaining non-linear effects associated with channel distortion and additive noise. The estimation of the noise spectrum for the spectral subtraction method relies on a new algorithm for speech/non-speech detection (SND) based on order statistics. This SND classification is also used for dropping long speech pauses. Results on Aurora 2 and Aurora 3 are reported.
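As a rough, hedged sketch of the two compensation steps described above (not the authors' exact pipeline), the snippet below applies magnitude-domain spectral subtraction with a given noise estimate and then histogram-equalizes a cepstral feature stream toward a standard-normal reference. All array shapes, the spectral floor, and the reference distribution are assumptions for illustration.

```python
# Hedged sketch: (1) spectral subtraction of a noise magnitude estimate,
# (2) histogram equalization of cepstral features onto a Gaussian reference.
import numpy as np
from scipy.stats import norm

def spectral_subtraction(mag, noise_mag, floor=0.05):
    """Subtract a noise magnitude estimate, keeping a small spectral floor."""
    return np.maximum(mag - noise_mag, floor * noise_mag)

def histogram_equalize(feats):
    """Map each cepstral coefficient's empirical CDF onto a standard normal."""
    out = np.empty_like(feats)
    n = feats.shape[0]
    for d in range(feats.shape[1]):
        ranks = np.argsort(np.argsort(feats[:, d]))   # 0..n-1 rank of each frame
        cdf = (ranks + 0.5) / n                       # empirical CDF in (0, 1)
        out[:, d] = norm.ppf(cdf)                     # reference-Gaussian quantiles
    return out

# Toy usage with random "spectrogram" frames and cepstral features (assumed shapes).
rng = np.random.default_rng(0)
mag = np.abs(rng.normal(size=(100, 257)))
cleaned = spectral_subtraction(mag, noise_mag=np.full(257, 0.3))
cep_eq = histogram_equalize(rng.normal(size=(100, 13)))
```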

29 citations

PatentDOI
TL;DR: In this paper, a method and apparatus estimate additive noise in a noisy signal using incremental Bayes learning, where a time-varying noise prior distribution is assumed and the hyperparameters (mean and variance) are updated recursively using an approximation of the posterior computed at the preceding time step.
Abstract: A method and apparatus estimate additive noise in a noisy signal using incremental Bayes learning, where a time-varying noise prior distribution is assumed and the hyperparameters (mean and variance) are updated recursively using an approximation of the posterior computed at the preceding time step. The additive noise in the time domain is represented in the log-spectrum or cepstrum domain before incremental Bayes learning is applied. The mean and variance estimates of the noise for each separate frame are then used to perform speech feature enhancement in the same log-spectrum or cepstrum domain.
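The patent describes an incremental Bayes update of the noise prior's hyperparameters; the snippet below is only a loose illustration of recursively tracking the mean and variance of noise log-spectra frame by frame with an exponential forgetting factor. The forgetting factor and the frame data are assumptions for the example, not the patented update rule.

```python
# Loose illustration (not the patented algorithm): recursive mean/variance
# tracking of noise log-spectra with an exponential forgetting factor.
import numpy as np

def update_noise_stats(mean, var, frame_logspec, alpha=0.95):
    """One recursive update of the noise mean/variance from a new frame."""
    new_mean = alpha * mean + (1.0 - alpha) * frame_logspec
    new_var = alpha * var + (1.0 - alpha) * (frame_logspec - new_mean) ** 2
    return new_mean, new_var

# Toy run over a sequence of log-spectral frames (assumed data).
rng = np.random.default_rng(1)
frames = rng.normal(loc=-2.0, scale=0.5, size=(50, 40))   # 50 frames, 40 bins
mean, var = frames[0].copy(), np.ones(40)
for frame in frames[1:]:
    mean, var = update_noise_stats(mean, var, frame)
```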

29 citations

Journal ArticleDOI
TL;DR: Two effective means of reducing the errors of PAC's are found; one is variable use of the PAC dimensions controlled by computation accuracy, and the other is smoothing along the time axis.
Abstract: Various parameter sets, including a spectrum envelope, cepstrum, autocorrelation function, linear predictive coefficients, and partial autocorrelation coefficients (PAC's), are evaluated experimentally to determine which constitutes the best parameter in spoken digit recognition. The principle of recognition is simple pattern matching in the parameter space with nonlinear adjustment of the time axis. The spectrum envelope and cepstrum attain the best recognition score of 100 percent for ten spoken digits of a single male speaker. PAC's seem preferable because of their ease of extraction and theoretical orthogonality; however, they tend to suffer from computation errors when computed by fixed-point arithmetic with a short accumulator length. We find two effective means of reducing these errors: one is variable use of the PAC dimensions controlled by computation accuracy, and the other is smoothing along the time axis. With these improvements the PAC's achieve almost 100 percent recognition.
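The recognition principle described above, pattern matching with nonlinear adjustment of the time axis, is what is now usually called dynamic time warping. The sketch below shows a minimal DTW template matcher over cepstral (or other) feature sequences; feature extraction is omitted, and the inputs are assumed to be (frames x dimensions) arrays rather than anything taken from the paper.

```python
# Minimal sketch: DTW distance between feature sequences and nearest-template
# classification, illustrating nonlinear time-axis adjustment.
import numpy as np

def dtw_distance(a, b):
    """Classic DTW with Euclidean local cost between feature frames."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

def recognize(utterance, templates):
    """Return the label of the stored template closest under DTW distance."""
    return min(templates, key=lambda label: dtw_distance(utterance, templates[label]))
```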

29 citations

Proceedings Article
01 Jan 2004
TL;DR: A new signal processing technique, “specmurt anasylis,” is proposed that provides a piano-roll-like visual display of multi-tone signals (e.g., polyphonic music) using specmurt filreting instead of quefrency alanysis with cepstrum liftering.
Abstract: In this paper, we propose a new signal processing technique, “specmurt anasylis,” that provides a piano-roll-like visual display of multi-tone signals (e.g., polyphonic music). Specmurt is defined as the inverse Fourier transform of a linear spectrum with logarithmic frequency, unlike the familiar cepstrum, which is defined as the inverse Fourier transform of a logarithmic spectrum with linear frequency. We apply this technique to music signals: frencyque anasylis using specmurt filreting instead of quefrency alanysis using cepstrum liftering. Assuming that each sound contained in the multi-pitch signal has exactly the same harmonic structure pattern (i.e., the same energy ratio of harmonic components), in the logarithmic frequency domain the overall shape of the multi-pitch spectrum is a superposition of the common spectral pattern with different degrees of parallel shift. The overall shape can therefore be expressed as a convolution of a fundamental frequency pattern (degrees of parallel shift and power) with the common harmonic structure pattern. The fundamental frequency pattern is restored by dividing the inverse Fourier transform of a given log-frequency spectrum, i.e., the specmurt, by that of the common harmonic structure pattern. The proposed method was successfully tested on several music recordings.
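A minimal sketch of the deconvolution step described above, under the stated assumption of a common harmonic pattern: the log-frequency spectrum is inverse-Fourier-transformed (the specmurt), divided by the specmurt of the common harmonic pattern, and transformed back to recover the fundamental frequency pattern. The regularization constant and the input arrays are assumptions; this is not the authors' implementation.

```python
# Hedged sketch: deconvolution by division in the "specmurt" domain
# (inverse FFT along the log-frequency axis).
import numpy as np

def specmurt_deconvolve(log_freq_spectrum, harmonic_pattern, eps=1e-8):
    """Recover the fundamental frequency pattern by spectral division."""
    n = len(log_freq_spectrum)
    obs = np.fft.ifft(log_freq_spectrum, n)          # specmurt of the observed spectrum
    ref = np.fft.ifft(harmonic_pattern, n)           # specmurt of the common harmonic pattern
    fundamental = np.fft.fft(obs / (ref + eps), n)   # divide, then transform back
    return np.real(fundamental)
```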

29 citations


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
82% related
Robustness (computer science)
94.7K papers, 1.6M citations
80% related
Feature (computer vision)
128.2K papers, 1.7M citations
79% related
Deep learning
79.8K papers, 2.1M citations
79% related
Support vector machine
73.6K papers, 1.7M citations
78% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    86
2022    206
2021    60
2020    96
2019    135
2018    130