Open Access · Posted Content

The Adjustment of Prediction Intervals to Account for Errors in Parameter Estimation

TLDR
In this article, the authors measure the closeness of the coverage probability, conditional on all of the data, of the adjusted PI to 1−α by the mean square of the difference between this conditional coverage probability and 1−α.
Abstract
Standard approximate 1−α prediction intervals (PIs) need to be adjusted to take account of the error in estimating the parameters. This adjustment may be aimed at setting the (unconditional) probability that the PI includes the value being predicted equal to 1−α. Alternatively, this adjustment may be aimed at setting the probability that the PI includes the value being predicted, conditional on an appropriate statistic T, equal to 1−α. For an autoregressive process of order p, it has been suggested that T consist of the last p observations. We provide a new criterion by which both forms of adjustment can be compared on an equal footing. This new criterion of performance is the closeness of the coverage probability, conditional on all of the data, of the adjusted PI to 1−α. In this paper, we measure this closeness by the mean square of the difference between this conditional coverage probability and 1−α. We illustrate the application of this new criterion to a Gaussian zero-mean autoregressive process of order 1 and one-step-ahead prediction. For this example, the comparison shows that the adjustment aimed at setting the coverage probability, conditional on the last observation, equal to 1−α is the better of the two adjustments.
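The criterion lends itself to a direct Monte Carlo check. The sketch below (Python; the function names, the least-squares fitting choice, and all settings are ours, not the paper's) estimates, for a Gaussian zero-mean AR(1) with one-step-ahead prediction, the mean square of the difference between the conditional coverage probability of the standard estimative PI and 1−α; the adjustments the paper compares aim to make this quantity small.

```python
# A minimal Monte Carlo sketch of the paper's criterion for a Gaussian
# zero-mean AR(1) and one-step-ahead prediction. Names, the least-squares
# estimator and the settings below are illustrative, not from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_ar1(n, phi, sigma):
    """Simulate a stationary zero-mean Gaussian AR(1) of length n."""
    y = np.empty(n)
    y[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2))
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(0.0, sigma)
    return y

def estimative_pi(y, alpha):
    """Standard approximate 1-alpha PI: phi_hat * y_n +/- z * sigma_hat."""
    phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
    sigma_hat = np.sqrt(np.mean((y[1:] - phi_hat * y[:-1]) ** 2))
    z = norm.ppf(1.0 - alpha / 2.0)
    centre = phi_hat * y[-1]
    return centre - z * sigma_hat, centre + z * sigma_hat

def conditional_coverage(lo, hi, y_n, phi, sigma):
    """Coverage given all the data: Y_{n+1} | data ~ N(phi * y_n, sigma^2)."""
    return norm.cdf((hi - phi * y_n) / sigma) - norm.cdf((lo - phi * y_n) / sigma)

phi, sigma, n, alpha, reps = 0.6, 1.0, 50, 0.05, 5000
sq_devs = []
for _ in range(reps):
    y = simulate_ar1(n, phi, sigma)
    lo, hi = estimative_pi(y, alpha)
    cov = conditional_coverage(lo, hi, y[-1], phi, sigma)
    sq_devs.append((cov - (1.0 - alpha)) ** 2)
print("mean square of (conditional coverage - (1 - alpha)):", np.mean(sq_devs))
```

Under this criterion, an adjusted PI would replace estimative_pi above, and whichever adjustment yields the smaller mean square is preferred.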


Citations
Journal Article

Improved Prediction Limits For AR(p) and ARCH(p) Processes

TL;DR: In this paper, a simulation-based prediction limit is described that improves on any given estimative d-step-ahead prediction limit for a Markov process; it is ideally suited to those Markov processes for which the algebraic manipulations required for analytically improved prediction limits are very complicated.
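The general shape of a simulation-based improvement can be sketched as nominal-level calibration: treat the fitted model as the truth, estimate by simulation the coverage the estimative limit actually attains, and retune the nominal level accordingly. The AR(1), one-step-ahead, grid-search details below are our simplifications, not the paper's construction.

```python
# Hedged sketch of calibrating an estimative prediction limit by simulation.
# AR(1), one step ahead, and the crude grid search are simplifications;
# the paper's procedure for general Markov processes differs in detail.
import numpy as np
from scipy.stats import norm

def fit_ar1(y):
    phi = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
    sigma = np.sqrt(np.mean((y[1:] - phi * y[:-1]) ** 2))
    return phi, sigma

def simulate_ar1(n, phi, sigma, rng):
    y = np.empty(n)
    y[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2))
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(0.0, sigma)
    return y

def estimative_limit(y, level):
    """Estimative upper prediction limit at nominal level `level`."""
    phi, sigma = fit_ar1(y)
    return phi * y[-1] + norm.ppf(level) * sigma

def calibrated_limit(y, level=0.95, reps=1000, rng=None):
    """Retune the nominal level so that, with the fitted model treated as
    the truth, the simulated coverage of the estimative limit is `level`."""
    rng = rng or np.random.default_rng(1)
    phi, sigma = fit_ar1(y)
    n = len(y)
    grid = np.linspace(level - 0.04, level + 0.04, 17)
    cov = []
    for u in grid:
        hits = 0
        for _ in range(reps):
            ys = simulate_ar1(n, phi, sigma, rng)
            y_next = phi * ys[-1] + rng.normal(0.0, sigma)
            hits += y_next <= estimative_limit(ys, u)  # refit inside
        cov.append(hits / reps)
    u_star = grid[np.argmin(np.abs(np.array(cov) - level))]
    return estimative_limit(y, u_star)
```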
Journal Article

A simple procedure for computing improved prediction intervals for autoregressive models

TL;DR: In this paper, the authors extend to Markov process models a recent result by Vidoni, which defines a relatively simple predictive distribution function whose quantiles give improved prediction limits.
Journal Article

Approximation methods for multiple period Value at Risk and Expected Shortfall prediction

TL;DR: In this paper, the authors propose methods for predicting multiple-period Value at Risk and Expected Shortfall based on the so-called iterated approach, in which the properties of the conditional distribution...
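In the iterated approach, the fitted one-step dynamics are rolled forward by simulation and the multi-period risk measures are read off the resulting distribution. The sketch below uses an ARCH(1) model with Gaussian innovations purely for illustration; the model, parameter values, and names are our assumptions, not the paper's.

```python
# Minimal sketch of iterated multi-period VaR/ES prediction: simulate h
# one-step-ahead returns recursively from a fitted ARCH(1) and read the
# risk measures off the simulated h-period return distribution.
import numpy as np

def simulate_h_period_returns(w, a, r_last, h, paths=100_000, rng=None):
    rng = rng or np.random.default_rng(0)
    total = np.zeros(paths)
    r_prev = np.full(paths, r_last)
    for _ in range(h):
        sig = np.sqrt(w + a * r_prev ** 2)      # ARCH(1) conditional volatility
        r = sig * rng.standard_normal(paths)    # iterate one-step predictions
        total += r
        r_prev = r
    return total

# Illustrative parameter values, not estimates from any data set.
r_h = simulate_h_period_returns(w=0.05, a=0.3, r_last=0.8, h=10)
q = np.quantile(r_h, 0.01)
var_99 = -q                                     # 99% Value at Risk
es_99 = -r_h[r_h <= q].mean()                   # 99% Expected Shortfall
print(var_99, es_99)
```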
Journal Article

The adjustment of prediction intervals to account for errors in parameter estimation

TL;DR: In this paper, the authors measure the closeness of the coverage probability, conditional on all of the data, of the adjusted PI to 1−α by the mean square of the difference between this conditional coverage probability and 1−α.
Journal Article

Bootstrap prediction intervals for autoregression

TL;DR: In this paper, the nonparametric bootstrap is applied to the problem of prediction in autoregression, where an alternative representation for AR(p) series is used, allowing for bootstrap replicates generated backward in time.
References
Journal Article

Bootstrap Prediction Intervals for Autoregression

TL;DR: In this article, the nonparametric bootstrap is applied to the problem of prediction in autoregression, where an alternative representation for AR(p) series is used, allowing for bootstrap replicates generated backward in time.
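The backward-in-time device can be condensed to a few lines for an AR(1): because each bootstrap replicate is generated backward from the observed endpoint, every replicate shares the last observation, so the resulting interval is conditional on it. The sketch below is our simplification of the AR(p) procedure, with illustrative names.

```python
# Condensed sketch of a backward bootstrap prediction interval for an AR(1).
# Replicates are generated backward in time so that each ends at the
# observed y_n; this is a simplification, not the full AR(p) procedure.
import numpy as np

def backward_bootstrap_pi(y, alpha=0.05, B=999, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(y)
    phi = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
    fwd = y[1:] - phi * y[:-1]          # forward residuals, for future values
    fwd -= fwd.mean()
    bwd = y[:-1] - phi * y[1:]          # backward residuals, for past values
    bwd -= bwd.mean()
    preds = np.empty(B)
    for b in range(B):
        ys = np.empty(n)
        ys[-1] = y[-1]                  # every replicate ends at observed y_n
        for t in range(n - 2, -1, -1):  # generate backward in time
            ys[t] = phi * ys[t + 1] + rng.choice(bwd)
        phi_b = np.sum(ys[1:] * ys[:-1]) / np.sum(ys[:-1] ** 2)  # refit
        preds[b] = phi_b * y[-1] + rng.choice(fwd)  # bootstrap future value
    return np.quantile(preds, [alpha / 2, 1 - alpha / 2])

# Usage: lo, hi = backward_bootstrap_pi(series, alpha=0.05)
```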
Journal Article

Estimating Properties of Autoregressive Forecasts

TL;DR: In this article, a second-order Taylor expansion is used to estimate the mean squared error (MSE) of forecasts from a fitted Gaussian autoregressive model.
Journal Article

Properties of Predictors for Autoregressive Time Series

TL;DR: In this paper, the prediction of the (n + s)th observation of the pth order autoregressive process is investigated, and the mean squared error of the predictor, through terms of order n^{-1}, conditional on Y_n, Y_{n-1}, ..., Y_{n-p+1}, is obtained for the stationary normal process.
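For orientation, a commonly cited first-order approximation for the simplest case of one-step prediction (s = 1) from a fitted AR(p), stated here as background rather than taken from this paper, is

$$
\mathrm{E}\big[(Y_{n+1} - \hat{Y}_{n+1})^2\big] \;\approx\; \sigma^2\Big(1 + \frac{p}{n}\Big),
$$

where $\sigma^2$ is the innovation variance; the paper's result refines expansions of this kind conditionally on $Y_n, Y_{n-1}, \ldots, Y_{n-p+1}$ and for general lead time s.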
Journal Article

The sampling distribution of forecasts from a first-order autoregression

TL;DR: In this paper, it is shown that the conditional distribution of forecast errors given the final-period observation is skewed towards the origin, and that this skewness is accentuated in the majority of cases by the statistical dependence between the parameter estimates and the final-period observation.
Journal Article

On bootstrap predictive inference for autoregressive processes

TL;DR: This paper considers bootstrap-based predictive inference for autoregressive processes of order p, both unconditional inference and inference conditional on the last p observed values, and points out the best way to apply the bootstrap to unconditional predictive inference when the process is Gaussian.