Journal ArticleDOI

The detection of observations possibly influential for model selection

01 Apr 1991 - Statistics & Probability Letters (North-Holland) - Vol. 11, Iss. 4, pp. 321-325

Abstract: Model selection can involve several variables and selection criteria. A simple method to detect observations possibly influential for model selection is proposed. The potential of this method is illustrated with three examples, each taken from related studies.
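The abstract does not reproduce the method here, but the general idea it describes can be sketched as follows: re-evaluate a model selection criterion with each observation deleted and flag the observations whose removal changes which model is selected. The function names and the two-model setup below are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    # Hypothetical sketch: flag observations whose deletion changes which of two
    # candidate models a selection criterion prefers. criterion(y, X) is any
    # scalar criterion to be minimised (e.g. AIC or BIC); smaller is better.
    def influential_for_selection(y, X_small, X_full, criterion):
        n = len(y)
        prefers_full = criterion(y, X_full) < criterion(y, X_small)
        flagged = []
        for i in range(n):
            keep = np.arange(n) != i
            prefers_full_i = criterion(y[keep], X_full[keep]) < criterion(y[keep], X_small[keep])
            if prefers_full_i != prefers_full:
                flagged.append(i)  # deleting observation i flips the selected model
        return flagged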
Citations

Posted Content
20 Feb 2006
Abstract: This paper contains a list of all publications over the period 1956-2005, as reported in the Rotterdam Econometric Institute Reprint series during 1957-2005.

84 citations


Journal ArticleDOI
R. H. Glendinning
TL;DR: This work focuses on four variants of the well known corner method derived from different estimating equations, modified to deal with outlier contaminated data using robust analogues of the autocorrelation function, inverse autocorrelation function, AR(∞) and MA(∞) representations.
Abstract: We consider the problem of determining the order of an ARMA process from outlier contaminated data. We focus on four variants of the well known corner method derived from different estimating equations, see Choi (1992). These are modified to deal with outlier contaminated data using robust analogues of the autocorrelation function, inverse autocorrelation function, AR(∞) and MA(∞) representations. We evaluate our suggestions (which appear to be new) in a large scale numerical experiment where they out-perform their non-robust competitors in outlier contaminated data. While there was no uniformly best robust procedure, our results support the use of the robust AR(∞) approach.

7 citations
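The abstract above builds its order-determination statistics from robust analogues of the autocorrelation function. As a minimal illustration of that first ingredient only (not of the corner method itself), a rank-based autocorrelation is one simple outlier-resistant stand-in; the function name and the choice of Spearman correlation are assumptions for illustration, not the estimators evaluated in the paper.

    import numpy as np
    from scipy.stats import spearmanr

    # Illustrative robust (rank-based) autocorrelation estimates at lags 1..max_lag.
    # Spearman correlation is used only as a simple outlier-resistant stand-in.
    def rank_acf(x, max_lag):
        x = np.asarray(x, dtype=float)
        acf = []
        for k in range(1, max_lag + 1):
            rho, _ = spearmanr(x[:-k], x[k:])
            acf.append(rho)
        return np.array(acf)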


Journal ArticleDOI
Philip Hans Franses, Guido Biessen
Abstract: A common characteristic of diagnostic measures on influential observations is the assumption that all relevant regressors are included in the model, and that none of them can be deleted. We review and illustrate a method to detect data points which are influential enough to establish the empirical (in)significance of regressors.

3 citations
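A hedged sketch of the kind of diagnostic the abstract above describes: refit the regression with each observation deleted and flag the cases whose removal moves the t-ratio of a chosen regressor across a conventional significance threshold, i.e. changes its empirical (in)significance. The helper name, the |t| > 2 cutoff, and the use of statsmodels are illustrative assumptions, not the authors' procedure.

    import numpy as np
    import statsmodels.api as sm

    # Flag observations whose deletion switches the significance verdict for
    # regressor `col` at the threshold `crit`.
    def significance_switches(y, X, col, crit=2.0):
        n = len(y)
        significant = abs(sm.OLS(y, X).fit().tvalues[col]) > crit
        flagged = []
        for i in range(n):
            keep = np.arange(n) != i
            significant_i = abs(sm.OLS(y[keep], X[keep]).fit().tvalues[col]) > crit
            if significant_i != significant:
                flagged.append(i)
        return flagged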


References

Journal ArticleDOI
Abstract: The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as a procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed and a new estimate, the minimum information theoretic criterion (AIC) estimate (MAICE), designed for the purpose of statistical identification, is introduced. When there are several competing models, the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC, defined by AIC = (−2) log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedures. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.

42,619 citations
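A minimal numerical illustration of the MAICE rule quoted above, assuming Gaussian (least-squares) regression models, for which AIC = (−2) log(maximum likelihood) + 2k reduces, up to an additive constant, to n·log(RSS/n) + 2k. The helper names are illustrative; either function can also serve as the `criterion` argument in the leave-one-out sketch given after the article abstract above.

    import numpy as np

    # AIC for a Gaussian linear regression: up to an additive constant,
    # -2*log(max likelihood) + 2k equals n*log(RSS/n) + 2k.
    def aic_gaussian(y, X):
        n, k = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        return n * np.log(rss / n) + 2 * k

    # MAICE: among competing candidate designs, pick the one minimising AIC.
    def maice(y, candidates):
        # candidates: dict mapping model label -> design matrix
        return min(candidates, key=lambda label: aic_gaussian(y, candidates[label]))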


Journal ArticleDOI
Abstract: The problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion. These terms are a valid large-sample criterion beyond the Bayesian context, since they do not depend on the a priori distribution.

35,659 citations
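The leading terms of the asymptotic expansion described above yield the familiar Schwarz criterion, commonly written as BIC = (−2) log(maximum likelihood) + k·log n. A hedged sketch for the Gaussian regression case, mirroring the AIC helper above (names are illustrative):

    import numpy as np

    # Schwarz criterion (BIC) for a Gaussian linear regression: up to a constant,
    # -2*log(max likelihood) + k*log(n) equals n*log(RSS/n) + k*log(n).
    def bic_gaussian(y, X):
        n, k = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        return n * np.log(rss / n) + k * np.log(n)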


Journal ArticleDOI
Grace Wahba
Abstract: Smoothing splines are well known to provide nice curves which smooth discrete, noisy data. We obtain a practical, effective method for estimating the optimum amount of smoothing from the data. Derivatives can be estimated from the data by differentiating the resulting (nearly) optimally smoothed spline. We consider the model $y_i = g(t_i) + \varepsilon_i$, $i = 1, 2, \ldots, n$, $t_i \in [0,1]$, where $g \in W_2^{(m)} = \{f : f, f', \ldots, f^{(m-1)} \text{ abs. cont.}, f^{(m)} \in \mathcal{L}_2[0,1]\}$, and the $\{\varepsilon_i\}$ are random errors with $E\varepsilon_i = 0$, $E\varepsilon_i\varepsilon_j = \sigma^2\delta_{ij}$. The error variance $\sigma^2$ may be unknown. As an estimate of $g$ we take the solution $g_{n,\lambda}$ to the problem: find $f \in W_2^{(m)}$ to minimize $\frac{1}{n}\sum_{j=1}^{n}(f(t_j) - y_j)^2 + \lambda \int_0^1 (f^{(m)}(u))^2\,du$. The function $g_{n,\lambda}$ is a smoothing polynomial spline of degree $2m-1$. The parameter $\lambda$ controls the tradeoff between the "roughness" of the solution, as measured by $\int_0^1 [f^{(m)}(u)]^2\,du$, and the infidelity to the data, as measured by $\frac{1}{n}\sum_{j=1}^{n}(f(t_j) - y_j)^2$, and so governs the average square error $R(\lambda; g) = R(\lambda)$ defined by $R(\lambda) = \frac{1}{n}\sum_{j=1}^{n}(g_{n,\lambda}(t_j) - g(t_j))^2$. We provide an estimate $\hat{\lambda}$, called the generalized cross-validation estimate, for the minimizer of $R(\lambda)$. The estimate $\hat{\lambda}$ is the minimizer of $V(\lambda)$ defined by $V(\lambda) = \frac{1}{n}\|(I - A(\lambda))y\|^2 / \left[\frac{1}{n}\operatorname{Trace}(I - A(\lambda))\right]^2$, where $y = (y_1, \ldots, y_n)^t$ and $A(\lambda)$ is the $n \times n$ matrix satisfying $(g_{n,\lambda}(t_1), \ldots, g_{n,\lambda}(t_n))^t = A(\lambda)y$. We prove that there exists a sequence of minimizers $\tilde{\lambda} = \tilde{\lambda}(n)$ of $EV(\lambda)$ such that, as the (regular) mesh $\{t_i\}_{i=1}^{n}$ becomes finer, $\lim_{n\to\infty} ER(\tilde{\lambda})/\min_{\lambda} ER(\lambda) \downarrow 1$. A Monte Carlo experiment with several smooth $g$'s was tried with $m = 2$, $n = 50$ and several values of $\sigma^2$, and typical values of $R(\hat{\lambda})/\min_{\lambda} R(\lambda)$ were found to be in the range 1.01-1.4. The derivative $g'$ of $g$ can be estimated by $g'_{n,\hat{\lambda}}(t)$. In the Monte Carlo examples tried, the minimizer of $R_D(\lambda) = \frac{1}{n}\sum_{j=1}^{n}(g'_{n,\lambda}(t_j) - g'(t_j))^2$ tended to be close to the minimizer of $R(\lambda)$, so that $\hat{\lambda}$ was also a good value of the smoothing parameter for estimating the derivative.

1,725 citations
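A small numerical sketch of the generalized cross-validation score V(λ) defined above, written for a generic linear smoother ŷ = A(λ)y. Purely for illustration, A(λ) is taken here from ridge regression on a design matrix X rather than from the polynomial smoothing spline treated in the paper; the function names and the grid search are assumptions.

    import numpy as np

    # V(lam) = (1/n)||(I - A(lam)) y||^2 / [(1/n) Trace(I - A(lam))]^2,
    # evaluated for the ridge smoother A(lam) = X (X'X + n*lam*I)^{-1} X'.
    def gcv_score(y, X, lam):
        n = len(y)
        A = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(X.shape[1]), X.T)
        resid = (np.eye(n) - A) @ y
        return (np.sum(resid ** 2) / n) / (np.trace(np.eye(n) - A) / n) ** 2

    # Generalized cross-validation estimate: the lambda on a grid minimising V.
    def gcv_choose(y, X, grid):
        return min(grid, key=lambda lam: gcv_score(y, X, lam))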


Performance Metrics
No. of citations received by the paper in previous years:

Year    Citations
2006    1
1998    1
1992    1