Journal ArticleDOI

Linear prediction: A tutorial review

John Makhoul
Vol. 63, Iss. 4, pp. 561-580
TLDR
This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of the present and past values of a hypothetical input to a system whose output is the given signal.
Abstract
This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and the present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
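The stationary (autocorrelation) method mentioned in the abstract leads to Toeplitz normal equations that are commonly solved by the Levinson-Durbin recursion, which also yields the reflection (partial correlation) coefficients discussed in the data-compression section. The sketch below illustrates this under my own naming and sign conventions (predictor polynomial A(z) = 1 + a1 z^-1 + ... + ap z^-p); it is an illustrative implementation, not code from the paper.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the all-pole normal equations for the autocorrelation method.

    r     : autocorrelation sequence r[0..order]
    order : number of poles p
    Returns (a, k, err):
      a   : predictor polynomial coefficients, a[0] = 1
      k   : reflection (PARCOR) coefficients, k[0..order-1]
      err : final minimum prediction error energy
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        # Correlation of the current predictor with the signal lag i
        acc = r[i] + a[1:i] @ r[i-1:0:-1]
        k_i = -acc / err
        k[i - 1] = k_i
        # Order-update of the predictor coefficients
        a[1:i+1] = a[1:i+1] + k_i * a[i-1::-1]
        # Each stage shrinks the error by the factor (1 - k_i^2)
        err *= (1.0 - k_i * k_i)
    return a, k, err

# Example: autocorrelation of an AR(1) process with pole at 0.5
r = np.array([1.0, 0.5, 0.25])
a, k, err = levinson_durbin(r, order=2)
```

For this input the recursion recovers a = [1, -0.5, 0], with first reflection coefficient -0.5 and normalized error 0.75, as expected for a first-order process analyzed with a second-order model.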


Citations
Journal ArticleDOI

Effect of growth hormone on human sleep energy.

TL;DR: The objective was to examine REM and delta sleep energy in adults with high and normal plasma growth hormone (GH) concentrations by means of power-spectrum EEG analysis.

Comparison of formant enhancement methods for HMM-based speech synthesis.

TL;DR: Experiments indicate that formant enhancement prior to HMM training improves the quality of synthetic speech by providing sharper formants, and that the performance of the new formant enhancement method is similar to that of the existing method.
Proceedings ArticleDOI

Speech Prediction Using an Adaptive Recurrent Neural Network with Application to Packet Loss Concealment

TL;DR: The proposed predictor is a single end-to-end network that captures all sorts of dependencies between samples, and therefore has the potential to outperform classical linear/non-linear and short-term/long-term speech predictor structures.
Journal ArticleDOI

Two-sided filters for frame-based prediction

TL;DR: A linear prediction model based on a two-sided predictor, which predicts from past and future samples within a frame, is presented; in simulations on speech data it showed at least a 5-dB improvement over one-sided prediction.
Proceedings ArticleDOI

On a covariance-lattice algorithm for linear prediction

TL;DR: The paper presents a new formulation of the so-called "covariance-lattice" algorithm for linear predictive analysis that makes no explicit use of the predictor coefficients but works directly on the reflection coefficients, making it better suited to fixed-point arithmetic implementation.
References
Journal ArticleDOI

A new look at the statistical model identification

TL;DR: In this article, a new estimate, the minimum information theoretic criterion estimate (MAICE), is introduced for the purpose of statistical identification; it is free from the ambiguities inherent in the application of conventional hypothesis-testing procedures.
Proceedings Article

Information Theory and an Extension of the Maximum Likelihood Principle

H. Akaike
TL;DR: The classical maximum likelihood principle can be considered a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion, providing answers to many practical problems of statistical model fitting.
Book ChapterDOI

Information Theory and an Extension of the Maximum Likelihood Principle

TL;DR: In this paper, it is shown that the classical maximum likelihood principle can be considered to be a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion.
Journal ArticleDOI

Singular value decomposition and least squares solutions

TL;DR: The decomposition of A is called the singular value decomposition (SVD); the diagonal elements of Σ are the non-negative square roots of the eigenvalues of AᵀA and are called singular values.