Author

David A. Pierce

Bio: David A. Pierce is an academic researcher from the University of Missouri. The author has contributed to research on the topics of autoregressive models and autoregressive integrated moving average models. The author has an h-index of 2, having co-authored 2 publications that have received 2,310 citations.

Papers
Journal ArticleDOI
TL;DR: In this paper, it is shown that the residual autocorrelations are, to a close approximation, representable as a singular linear transformation of the autocorrelations of the errors, so that they possess a singular normal distribution.
Abstract: Many statistical models, and in particular autoregressive-moving average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values, the resulting sequence is referred to as the "residuals," which can be regarded as estimates of the errors. If the appropriate model has been chosen, there will be zero autocorrelation in the errors. In checking adequacy of fit it is therefore logical to study the sample autocorrelation function of the residuals. For large samples the residuals from a correctly fitted model resemble very closely the true errors of the process; however, care is needed in interpreting the serial correlations of the residuals. It is shown here that the residual autocorrelations are to a close approximation representable as a singular linear transformation of the autocorrelations of the errors so that they possess a singular normal distribution. Failing to allow for this results in a tendency to overlook evidence of lack of fit. Tests of fit and diagnostic checks are devised which take these facts into account.

2,533 citations
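
As a rough illustration of the portmanteau idea described in this abstract (my own sketch, not code from the paper), the residual autocorrelations can be pooled into a statistic Q = n * sum_k r_k^2 and referred to a chi-square distribution whose degrees of freedom are reduced by the number of fitted ARMA parameters. The helper names below are assumptions.

```python
# Sketch of a Box-Pierce style portmanteau check on model residuals.
# Helper names (sample_acf, box_pierce) are illustrative, not from the paper.
import numpy as np
from scipy import stats

def sample_acf(x, nlags):
    """Sample autocorrelations r_1, ..., r_nlags of a mean-corrected series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

def box_pierce(residuals, m, n_arma_params):
    """Q = n * sum_k r_k^2, referred to chi-square with m - (p + q) degrees of freedom."""
    n = len(residuals)
    r = sample_acf(residuals, m)
    q = n * np.sum(r ** 2)
    dof = m - n_arma_params
    return q, stats.chi2.sf(q, dof)
```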

01 Apr 1968
TL;DR: It is shown that to a close approximation the residuals from any moving average or mixed autoregressive-moving average process will be the same as those from a suitably chosen autoregressive process.
Abstract: It is shown that to a close approximation the residuals from any moving average or mixed autoregressive-moving average process will be the same as those from a suitably chosen autoregressive process. The adequacy of this approximation is confirmed by empirical calculation. It follows from this that one need not consider separately these two classes of processes.

14 citations
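
A small numerical check of this equivalence can be sketched as follows (my own illustration under assumed settings: an MA(1) with theta = 0.6, n = 500, and a long AR(15) fitted by ordinary least squares); the residuals from the long autoregression should be almost perfectly correlated with the MA(1) innovations.

```python
# Illustrative check: residuals from a long AR fit track the MA(1) innovations.
import numpy as np

rng = np.random.default_rng(0)
n, theta = 500, 0.6
e = rng.standard_normal(n)
y = e + theta * np.concatenate(([0.0], e[:-1]))   # MA(1): y_t = e_t + theta * e_{t-1}

# Fit a long autoregression AR(p) by ordinary least squares.
p = 15
X = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])  # lags y_{t-1}, ..., y_{t-p}
coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
ar_resid = y[p:] - X @ coef

# Innovations of the MA(1) recovered recursively using the true theta.
ma_resid = np.empty(n)
ma_resid[0] = y[0]
for t in range(1, n):
    ma_resid[t] = y[t] - theta * ma_resid[t - 1]

print(np.corrcoef(ar_resid, ma_resid[p:])[0, 1])  # expected to be close to 1
```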


Cited by
Journal ArticleDOI
TL;DR: In this article, representations for the limit distributions of the regression estimator of $\rho$ and of the regression t test are derived under the assumption that $\rho = \pm 1$ in the model $Y_t = \rho Y_{t-1} + e_t$.
Abstract: Let $n$ observations $Y_1, Y_2, \ldots, Y_n$ be generated by the model $Y_t = \rho Y_{t-1} + e_t$, where $Y_0$ is a fixed constant and $\{e_t\}_{t=1}^{n}$ is a sequence of independent normal random variables with mean 0 and variance $\sigma^2$. Properties of the regression estimator of $\rho$ are obtained under the assumption that $\rho = \pm 1$. Representations for the limit distributions of the estimator of $\rho$ and of the regression t test are derived. The estimator of $\rho$ and the regression t test furnish methods of testing the hypothesis that $\rho = 1$.

23,509 citations
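
A minimal sketch of the regression quantities involved (assumed function name, not the authors' code): the no-intercept OLS estimator of $\rho$ and the associated t statistic for $\rho = 1$. Under the unit-root null this statistic follows the nonstandard limit distribution derived in the paper rather than Student's t.

```python
# Sketch: OLS estimate of rho in Y_t = rho * Y_{t-1} + e_t and the t statistic
# for H0: rho = 1. Under H0 the statistic has a nonstandard limit distribution,
# so it is not compared with Student-t critical values.
import numpy as np

def rho_hat_and_tau(y):
    y = np.asarray(y, dtype=float)
    y_lag, y_cur = y[:-1], y[1:]
    rho = np.dot(y_lag, y_cur) / np.dot(y_lag, y_lag)   # regression estimator of rho
    resid = y_cur - rho * y_lag
    s2 = np.dot(resid, resid) / (len(y_cur) - 1)        # residual variance estimate
    se = np.sqrt(s2 / np.dot(y_lag, y_lag))             # standard error of rho-hat
    return rho, (rho - 1.0) / se
```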

Book ChapterDOI
TL;DR: This paper provides a concise overview of time series analysis in the time and frequency domains, with many references for further reading.
Abstract: Any series of observations ordered along a single dimension, such as time, may be thought of as a time series. The emphasis in time series analysis is on studying the dependence among observations at different points in time. What distinguishes time series analysis from general multivariate analysis is precisely the temporal order imposed on the observations. Many economic variables, such as GNP and its components, price indices, sales, and stock returns are observed over time. In addition to being interested in the contemporaneous relationships among such variables, we are often concerned with relationships between their current and past values, that is, relationships over time.

9,919 citations

Journal ArticleDOI
TL;DR: In this paper, the overall test for lack of fit in autoregressive-moving average models proposed by Box & Pierce (1970) is considered, and it is shown that a substantially improved approximation results from a simple modification of this test.
Abstract: The overall test for lack of fit in autoregressive-moving average models proposed by Box & Pierce (1970) is considered. It is shown that a substantially improved approximation results from a simple modification of this test. Some consideration is given to the power of such tests and their robustness when the innovations are nonnormal. Similar modifications in the overall tests used for transfer function-noise models are proposed.

6,008 citations
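
The modification referred to above is commonly written as Q* = n(n+2) * sum_k r_k^2 / (n - k); a self-contained sketch (illustrative helper names, not the authors' code) is shown below.

```python
# Sketch of the modified portmanteau statistic Q* = n(n+2) * sum_k r_k^2 / (n - k),
# referred to a chi-square distribution with m - (p + q) degrees of freedom.
import numpy as np
from scipy import stats

def sample_acf(x, nlags):
    x = np.asarray(x, dtype=float) - np.mean(x)
    d = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / d for k in range(1, nlags + 1)])

def modified_portmanteau(residuals, m, n_arma_params):
    n = len(residuals)
    r = sample_acf(residuals, m)                              # r_1, ..., r_m
    q_star = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, m + 1)))
    return q_star, stats.chi2.sf(q_star, m - n_arma_params)
```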

Journal ArticleDOI
TL;DR: The Lagrange multiplier (LM) method tests the effect on the first-order conditions for a maximum of the likelihood of imposing the null hypothesis; this paper expounds the various forms of the LM statistic and illustrates its construction for a number of econometric specifications.
Abstract: Many econometric models are susceptible to analysis only by asymptotic techniques and there are three principles, based on asymptotic theory, for the construction of tests of parametric hypotheses. These are: (i) the Wald (W) test which relies on the asymptotic normality of parameter estimators, (ii) the maximum likelihood ratio (LR) procedure and (iii) the Lagrange multiplier (LM) method which tests the effect on the first order conditions for a maximum of the likelihood of imposing the hypothesis. In the econometric literature, most attention seems to have been centred on the first two principles. Familiar "t-tests" usually rely on the W principle for their validity while there have been a number of papers advocating and illustrating the use of the LR procedure. However, all three are equivalent in well-behaved problems in the sense that they give statistics with the same asymptotic distribution when the null hypothesis is true and have the same asymptotic power characteristics. Choice of any one principle must therefore be made by reference to other criteria such as small sample properties or computational convenience. In many situations the W test is attractive for this latter reason because it is constructed from the unrestricted estimates of the parameters and their estimated covariance matrix. The LM test is based on estimation with the hypothesis imposed as parametric restrictions so it seems reasonable that a choice between W or LM be based on the relative ease of estimation under the null and alternative hypotheses. Whenever it is easier to estimate the restricted model, the LM test will generally be more useful. It then provides applied researchers with a simple technique for assessing the adequacy of their particular specification. This paper has two aims. The first is to exposit the various forms of the LM statistic and to collect together some of the relevant research reported in the mathematical statistics literature. The second is to illustrate the construction of LM tests by considering a number of particular econometric specifications as examples. It will be found that in many instances the LM statistic can be computed by a regression using the residuals of the fitted model which, because of its simplicity, is itself estimated by OLS. The paper contains five sections. In Section 2, the LM statistic is outlined and some alternative versions of it are discussed. Section 3 gives the derivation of the statistic for

5,826 citations
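
As a hedged illustration of the point that the LM statistic can often be computed by a regression using the residuals of the fitted model, the sketch below uses a first-order serial-correlation test for a linear model as one concrete instance; the function name and the specific auxiliary regression are my own choices, not taken from the paper.

```python
# Illustrative LM-style test via an auxiliary regression on OLS residuals:
# test for no first-order serial correlation in y = X b + u (X includes a constant).
import numpy as np
from scipy import stats

def lm_serial_correlation(y, X):
    n = len(y)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ b                                  # restricted-model residuals
    u_lag = np.concatenate(([0.0], u[:-1]))        # lagged residuals (zero-padded)
    Z = np.column_stack([X, u_lag])                # auxiliary regressors
    g, *_ = np.linalg.lstsq(Z, u, rcond=None)
    fitted = Z @ g
    r2 = 1.0 - np.sum((u - fitted) ** 2) / np.sum((u - u.mean()) ** 2)
    lm = n * r2                                    # asymptotically chi-square(1) under H0
    return lm, stats.chi2.sf(lm, 1)
```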

Journal ArticleDOI
TL;DR: A textbook treatment of time series modelling, covering stationary processes, the autocorrelation function, ARMA models, spectral analysis, and modelling and prediction with ARMA processes.
Abstract: Preface
1 INTRODUCTION: 1.1 Examples of Time Series; 1.2 Objectives of Time Series Analysis; 1.3 Some Simple Time Series Models; 1.3.3 A General Approach to Time Series Modelling; 1.4 Stationary Models and the Autocorrelation Function; 1.4.1 The Sample Autocorrelation Function; 1.4.2 A Model for the Lake Huron Data; 1.5 Estimation and Elimination of Trend and Seasonal Components; 1.5.1 Estimation and Elimination of Trend in the Absence of Seasonality; 1.5.2 Estimation and Elimination of Both Trend and Seasonality; 1.6 Testing the Estimated Noise Sequence; 1.7 Problems
2 STATIONARY PROCESSES: 2.1 Basic Properties; 2.2 Linear Processes; 2.3 Introduction to ARMA Processes; 2.4 Properties of the Sample Mean and Autocorrelation Function; 2.4.2 Estimation of $\gamma(\cdot)$ and $\rho(\cdot)$; 2.5 Forecasting Stationary Time Series; 2.5.3 Prediction of a Stationary Process in Terms of Infinitely Many Past Values; 2.6 The Wold Decomposition; Problems
3 ARMA MODELS: 3.1 ARMA($p,q$) Processes; 3.2 The ACF and PACF of an ARMA($p,q$) Process; 3.2.1 Calculation of the ACVF; 3.2.2 The Autocorrelation Function; 3.2.3 The Partial Autocorrelation Function; 3.3 Forecasting ARMA Processes; Problems
4 SPECTRAL ANALYSIS: 4.1 Spectral Densities; 4.2 The Periodogram; 4.3 Time-Invariant Linear Filters; 4.4 The Spectral Density of an ARMA Process; Problems
5 MODELLING AND PREDICTION WITH ARMA PROCESSES: 5.1 Preliminary Estimation; 5.1.1 Yule-Walker Estimation; 5.1.3 The Innovations Algorithm; 5.1.4 The Hannan-Rissanen Algorithm; 5.2 Maximum Likelihood Estimation; 5.3 Diagnostic Checking; 5.3.1 The Graph of $\{\hat{W}_t,\ t=1,\ldots,n\}$; 5.3.2 The Sample ACF of the Residuals

3,732 citations