Author

Peter J. Brockwell

Other affiliations: Kuwait University, RMIT University, Columbia University
Bio: Peter J. Brockwell is an academic researcher at Colorado State University. He has contributed to research on autoregressive and autoregressive moving-average models, has an h-index of 35, and has co-authored 93 publications receiving 16,781 citations. Previous affiliations of Peter J. Brockwell include Kuwait University and RMIT University.


Papers
Book
19 Aug 2009
TL;DR: A book-length treatment of stationary time series: ARMA processes, the spectral representation of a stationary process, prediction, estimation of the mean and the autocovariance function, estimation for ARMA models, model building and forecasting with ARIMA processes, inference for the spectrum, multivariate time series, and state-space models with the Kalman recursions.
Abstract: 1 Stationary Time Series.- 2 Hilbert Spaces.- 3 Stationary ARMA Processes.- 4 The Spectral Representation of a Stationary Process.- 5 Prediction of Stationary Processes.- 6* Asymptotic Theory.- 7 Estimation of the Mean and the Autocovariance Function.- 8 Estimation for ARMA Models.- 9 Model Building and Forecasting with ARIMA Processes.- 10 Inference for the Spectrum of a Stationary Process.- 11 Multivariate Time Series.- 12 State-Space Models and the Kalman Recursions.- 13 Further Topics.- Appendix: Data Sets.
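The chapters listed above on stationary ARMA processes and on estimation of the mean and the autocovariance function lend themselves to a short numerical illustration. The sketch below is not from the book; it is a minimal NumPy example, with the ARMA(1,1) parameters chosen arbitrarily, that simulates a series and computes the sample mean and the sample autocovariance/autocorrelation function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an ARMA(1,1) process  X_t = phi*X_{t-1} + Z_t + theta*Z_{t-1},
# with Z_t iid N(0, sigma^2).  Parameter values are illustrative only.
phi, theta, sigma = 0.6, 0.4, 1.0
n, burn = 500, 100
z = rng.normal(0.0, sigma, size=n + burn)
x = np.zeros(n + burn)
for t in range(1, n + burn):
    x[t] = phi * x[t - 1] + z[t] + theta * z[t - 1]
x = x[burn:]  # drop a burn-in so the start-up effect is negligible

# Sample mean and sample autocovariance function
#   gamma_hat(h) = (1/n) * sum_{t=1}^{n-h} (x_{t+h} - xbar) * (x_t - xbar)
def sample_acvf(x, h):
    xc = x - x.mean()
    return np.sum(xc[h:] * xc[:len(x) - h]) / len(x)

acvf = np.array([sample_acvf(x, h) for h in range(11)])
print("sample mean:", round(x.mean(), 3))
print("sample ACF, lags 0-10:", np.round(acvf / acvf[0], 3))
```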

5,260 citations

Journal ArticleDOI
TL;DR: An introductory treatment of time series modelling: stationary processes and the autocorrelation function, forecasting (including prediction of a stationary process in terms of infinitely many past values), ARMA models and their ACF and PACF, spectral analysis, and modelling and prediction with ARMA processes.
Abstract: Preface 1 INTRODUCTION 1.1 Examples of Time Series 1.2 Objectives of Time Series Analysis 1.3 Some Simple Time Series Models 1.3.3 A General Approach to Time Series Modelling 1.4 Stationary Models and the Autocorrelation Function 1.4.1 The Sample Autocorrelation Function 1.4.2 A Model for the Lake Huron Data 1.5 Estimation and Elimination of Trend and Seasonal Components 1.5.1 Estimation and Elimination of Trend in the Absence of Seasonality 1.5.2 Estimation and Elimination of Both Trend and Seasonality 1.6 Testing the Estimated Noise Sequence 1.7 Problems 2 STATIONARY PROCESSES 2.1 Basic Properties 2.2 Linear Processes 2.3 Introduction to ARMA Processes 2.4 Properties of the Sample Mean and Autocorrelation Function 2.4.2 Estimation of $\gamma(\cdot)$ and $\rho(\cdot)$ 2.5 Forecasting Stationary Time Series 2.5.3 Prediction of a Stationary Process in Terms of Infinitely Many Past Values 2.6 The Wold Decomposition 2.7 Problems 3 ARMA MODELS 3.1 ARMA($p,q$) Processes 3.2 The ACF and PACF of an ARMA($p,q$) Process 3.2.1 Calculation of the ACVF 3.2.2 The Autocorrelation Function 3.2.3 The Partial Autocorrelation Function 3.3 Forecasting ARMA Processes 3.4 Problems 4 SPECTRAL ANALYSIS 4.1 Spectral Densities 4.2 The Periodogram 4.3 Time-Invariant Linear Filters 4.4 The Spectral Density of an ARMA Process 4.5 Problems 5 MODELLING AND PREDICTION WITH ARMA PROCESSES 5.1 Preliminary Estimation 5.1.1 Yule-Walker Estimation 5.1.3 The Innovations Algorithm 5.1.4 The Hannan-Rissanen Algorithm 5.2 Maximum Likelihood Estimation 5.3 Diagnostic Checking 5.3.1 The Graph of $\{\hat{W}_t,\ t=1,\ldots,n\}$ 5.3.2 The Sample ACF of the Residuals
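Section 5.1.1 in the contents above names Yule-Walker estimation. As a hedged illustration (not code from the book, with the AR order, sample size, and parameters chosen arbitrarily), the sketch below builds the sample autocovariances, solves the Yule-Walker equations Gamma_p * phi = gamma_p, and recovers the coefficients of a simulated AR(2) series.

```python
import numpy as np
from scipy.linalg import toeplitz

def sample_acvf(x, max_lag):
    """Sample autocovariances gamma_hat(0), ..., gamma_hat(max_lag)."""
    n = len(x)
    xc = x - x.mean()
    return np.array([np.sum(xc[h:] * xc[:n - h]) / n for h in range(max_lag + 1)])

def yule_walker(x, p):
    """Yule-Walker estimates (phi_hat, sigma2_hat) for an AR(p) model.

    Solves Gamma_p * phi = gamma_p, where Gamma_p is the Toeplitz matrix of
    sample autocovariances at lags 0..p-1 and gamma_p = (gamma(1), ..., gamma(p)).
    """
    g = sample_acvf(x, p)
    Gamma = toeplitz(g[:p])             # p x p symmetric Toeplitz matrix
    phi = np.linalg.solve(Gamma, g[1:p + 1])
    sigma2 = g[0] - phi @ g[1:p + 1]    # white-noise variance estimate
    return phi, sigma2

# Example: fit AR(2) to a simulated AR(2) series (parameters are illustrative).
rng = np.random.default_rng(1)
n, true_phi = 1000, np.array([0.5, -0.3])
x = np.zeros(n)
for t in range(2, n):
    x[t] = true_phi @ x[t - 2:t][::-1] + rng.normal()
phi_hat, sigma2_hat = yule_walker(x, 2)
print("phi_hat:", np.round(phi_hat, 3), "sigma2_hat:", round(sigma2_hat, 3))
```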

3,732 citations

Book
01 Jan 1996
TL;DR: In this book, the authors present a general approach to time series analysis based on simple time series models, the autocorrelation function (ACF), and the Wold decomposition.
Abstract: Preface 1 INTRODUCTION 1.1 Examples of Time Series 1.2 Objectives of Time Series Analysis 1.3 Some Simple Time Series Models 1.3.3 A General Approach to Time Series Modelling 1.4 Stationary Models and the Autocorrelation Function 1.4.1 The Sample Autocorrelation Function 1.4.2 A Model for the Lake Huron Data 1.5 Estimation and Elimination of Trend and Seasonal Components 1.5.1 Estimation and Elimination of Trend in the Absence of Seasonality 1.5.2 Estimation and Elimination of Both Trend and Seasonality 1.6 Testing the Estimated Noise Sequence 1.7 Problems 2 STATIONARY PROCESSES 2.1 Basic Properties 2.2 Linear Processes 2.3 Introduction to ARMA Processes 2.4 Properties of the Sample Mean and Autocorrelation Function 2.4.2 Estimation of $\gamma(\cdot)$ and $\rho(\cdot)$ 2.5 Forecasting Stationary Time Series 2.5.3 Prediction of a Stationary Process in Terms of Infinitely Many Past Values 2.6 The Wold Decomposition 2.7 Problems 3 ARMA MODELS 3.1 ARMA($p,q$) Processes 3.2 The ACF and PACF of an ARMA($p,q$) Process 3.2.1 Calculation of the ACVF 3.2.2 The Autocorrelation Function 3.2.3 The Partial Autocorrelation Function 3.3 Forecasting ARMA Processes 3.4 Problems 4 SPECTRAL ANALYSIS 4.1 Spectral Densities 4.2 The Periodogram 4.3 Time-Invariant Linear Filters 4.4 The Spectral Density of an ARMA Process 4.5 Problems 5 MODELLING AND PREDICTION WITH ARMA PROCESSES 5.1 Preliminary Estimation 5.1.1 Yule-Walker Estimation 5.1.3 The Innovations Algorithm 5.1.4 The Hannan-Rissanen Algorithm 5.2 Maximum Likelihood Estimation 5.3 Diagnostic Checking 5.3.1 The Graph of $\{\hat{W}_t,\ t=1,\ldots,n\}$ 5.3.2 The Sample ACF of the Residuals
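The same contents list also names the innovations algorithm (Section 5.1.3). The recursion below is a minimal sketch of the standard innovations algorithm for one-step prediction of a zero-mean stationary series with known autocovariances; the MA(1) sanity check at the end uses arbitrary parameter values and is not taken from the book.

```python
import numpy as np

def innovations(gamma, n):
    """Innovations algorithm for a zero-mean stationary series.

    Given autocovariances gamma(0..n), returns theta[m, j] and the one-step
    prediction mean squared errors v[0..n], where
    X_hat_{m+1} = sum_{j=1}^{m} theta[m, j] * (X_{m+1-j} - X_hat_{m+1-j}).
    """
    v = np.zeros(n + 1)
    theta = np.zeros((n + 1, n + 1))
    v[0] = gamma[0]
    for m in range(1, n + 1):
        for k in range(m):
            s = gamma[m - k] - sum(theta[k, k - j] * theta[m, m - j] * v[j]
                                   for j in range(k))
            theta[m, m - k] = s / v[k]
        v[m] = gamma[0] - sum(theta[m, m - j] ** 2 * v[j] for j in range(m))
    return theta, v

# Sanity check: for an MA(1) with theta = 0.5 and sigma^2 = 1, the coefficients
# theta[m, 1] should approach 0.5 and v[m] should approach 1 as m grows.
theta_true, sigma2 = 0.5, 1.0
gamma = np.zeros(21)
gamma[0] = sigma2 * (1 + theta_true ** 2)
gamma[1] = sigma2 * theta_true
th, v = innovations(gamma, 20)
print("theta_{20,1}:", round(th[20, 1], 3), " v_20:", round(v[20], 3))
```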

3,126 citations


Cited by
Journal ArticleDOI
TL;DR: A new method for analysing nonlinear and non-stationary data has been developed; its key part is the empirical mode decomposition, with which any complicated data set can be decomposed.
Abstract: A new method for analysing nonlinear and non-stationary data has been developed. The key part of the method is the empirical mode decomposition method with which any complicated data set can be dec...
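The abstract is truncated here, but the named technique, empirical mode decomposition, works by repeatedly "sifting" a signal: interpolate envelopes through its local maxima and minima and subtract their mean until an intrinsic mode function remains. The sketch below is a deliberately simplified illustration of that sifting step written with NumPy/SciPy (fixed iteration count, no boundary handling or stopping criterion, made-up test signal); it is not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def first_imf(t, x, n_sift=10):
    """Crude sketch of EMD sifting: extract an approximate first intrinsic mode function."""
    h = x.copy()
    for _ in range(n_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:      # need enough points to spline
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)  # upper envelope
        lower = CubicSpline(t[minima], h[minima])(t)  # lower envelope
        h = h - (upper + lower) / 2.0                 # subtract the local mean
    return h

# Demo: a fast oscillation riding on a slow one; the first IMF should roughly
# recover the fast component away from the boundaries.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 3 * t)
imf1 = first_imf(t, x)
residual = x - imf1
print("correlation of IMF1 with the fast component:",
      round(np.corrcoef(imf1, np.sin(2 * np.pi * 30 * t))[0, 1], 3))
```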

18,956 citations

Posted Content
TL;DR: The paper describes the advantages of these studies, suggests how they can be improved, and provides aids in judging the validity of the inferences they draw; design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations are advocated.
Abstract: Using research designs patterned after randomized experiments, many recent economic studies examine outcome measures for treatment groups and comparison groups that are not randomly assigned. By using variation in explanatory variables generated by changes in state laws, government draft mechanisms, or other means, these studies obtain variation that is readily examined and is plausibly exogenous. This paper describes the advantages of these studies and suggests how they can be improved. It also provides aids in judging the validity of inferences they draw. Design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations are advocated.
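The designs discussed here, a treatment group and a comparison group observed before and after an intervention, are often summarized by a difference-in-differences contrast. The sketch below only illustrates that arithmetic; the group means are invented, and the difference-in-differences framing is this note's illustration, not a claim about the paper's notation.

```python
import numpy as np

# Hypothetical outcome means: rows = (comparison, treatment), cols = (pre, post).
# Values are invented purely to show the arithmetic.
means = np.array([
    [10.0, 11.0],   # comparison group: pre, post
    [10.5, 13.0],   # treatment group:  pre, post
])

# Difference-in-differences: (treat_post - treat_pre) - (comp_post - comp_pre)
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print("difference-in-differences estimate:", did)   # 2.5 - 1.0 = 1.5
```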

7,222 citations

Journal ArticleDOI

6,278 citations

Journal ArticleDOI
TL;DR: A protocol for data exploration is provided; current tools to detect outliers, heterogeneity of variance, collinearity, dependence of observations, problems with interactions, double zeros in multivariate analysis, zero inflation in generalized linear modelling, and the correct type of relationships between dependent and independent variables are discussed; and advice on how to address these problems when they arise is provided.
Abstract: Summary 1. While teaching statistics to ecologists, the lead authors of this paper have noticed common statistical problems. If a random sample of their work (including scientific papers) produced before doing these courses were selected, half would probably contain violations of the underlying assumptions of the statistical techniques employed. 2. Some violations have little impact on the results or ecological conclusions; yet others increase type I or type II errors, potentially resulting in wrong ecological conclusions. Most of these violations can be avoided by applying better data exploration. These problems are especially troublesome in applied ecology, where management and policy decisions are often at stake. 3. Here, we provide a protocol for data exploration; discuss current tools to detect outliers, heterogeneity of variance, collinearity, dependence of observations, problems with interactions, double zeros in multivariate analysis, zero inflation in generalized linear modelling, and the correct type of relationships between dependent and independent variables; and provide advice on how to address these problems when they arise. We also address misconceptions about normality, and provide advice on data transformations. 4. Data exploration avoids type I and type II errors, among other problems, thereby reducing the chance of making wrong ecological conclusions and poor recommendations. It is therefore essential for good quality management and policy based on statistical analyses.
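Two of the checks named in this protocol, outlier screening and collinearity, are easy to illustrate numerically. The sketch below is not the authors' code; the data, thresholds, and helper names are arbitrary. It flags potential outliers with an interquartile-range rule and computes variance inflation factors from the R² of regressing each covariate on the others.

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Indices of points outside [Q1 - k*IQR, Q3 + k*IQR] (a common screening rule)."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return np.where((x < q1 - k * iqr) | (x > q3 + k * iqr))[0]

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes
    from regressing column j of X on the remaining columns plus an intercept."""
    n, p = X.shape
    out = np.zeros(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

# Toy data: x2 is nearly a copy of x1, so its VIF should be large.
rng = np.random.default_rng(2)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
print("VIFs:", np.round(vif(X), 1))
print("outlier indices in x3:", iqr_outliers(np.append(x3, 8.0)))  # 8.0 is an injected outlier
```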

5,894 citations

Journal ArticleDOI
TL;DR: A bias-corrected version of the Akaike information criterion, called AICC, is derived for regression and autoregressive time series models; it is of particular use when the sample size is small, or when the number of fitted parameters is a moderate to large fraction of the sample size.
Abstract: SUMMARY A bias correction to the Akaike information criterion, AIC, is derived for regression and autoregressive time series models. The correction is of particular use when the sample size is small, or when the number of fitted parameters is a moderate to large fraction of the sample size. The corrected method, called AICC, is asymptotically efficient if the true model is infinite dimensional. Furthermore, when the true model is of finite dimension, AICC is found to provide better model order choices than any other asymptotically efficient method. Applications to nonstationary autoregressive and mixed autoregressive moving average time series models are also discussed.
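The paper derives AICC formally; the widely quoted form of the correction adds 2k(k+1)/(n - k - 1) to the ordinary AIC, where k is the number of fitted parameters and n the sample size (conventions differ on whether the noise variance is counted in k, so treat this sketch as an assumption rather than the paper's exact definition).

```python
def aic(loglik, k):
    """Ordinary Akaike information criterion: -2 log L + 2k."""
    return -2.0 * loglik + 2.0 * k

def aicc(loglik, k, n):
    """Small-sample corrected AIC in its commonly quoted form:
    AICc = AIC + 2k(k+1)/(n - k - 1).
    The correction grows as k approaches n, penalizing heavily
    parameterized models fitted to short series."""
    return aic(loglik, k) + 2.0 * k * (k + 1) / (n - k - 1)

# Example: the correction matters for short series but fades for long ones.
loglik, k = -120.0, 5
for n in (30, 300):
    print(f"n={n}:  AIC={aic(loglik, k):.1f}  AICc={aicc(loglik, k, n):.1f}")
```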

5,867 citations