
Showing papers on "Consistent estimator published in 2004"


Journal ArticleDOI
TL;DR: This work focuses on the finite-sample behavior of heteroskedasticity-consistent covariance matrix estimators and associated quasi-t tests, and proposes a new estimator tailored to take into account the effect of leverage points in the design matrix on those tests.
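The entry above concerns leverage-aware heteroskedasticity-consistent (HC) covariance estimators. A minimal sketch of the standard HC family it builds on — White's HC0 and the leverage-adjusted HC3, not the paper's new estimator — on simulated heteroskedastic data:

```python
import numpy as np

def hc_cov(X, e, k=3):
    """Heteroskedasticity-consistent sandwich covariance for OLS.

    k=0 gives White's HC0; k=3 gives the leverage-adjusted HC3,
    which inflates squared residuals at high-leverage points."""
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages h_ii
    omega = e**2 / (1.0 - h)**k                   # adjusted squared residuals
    return XtX_inv @ (X.T * omega) @ X @ XtX_inv

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# error variance grows with |x|: heteroskedastic design
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (1 + np.abs(X[:, 1]))
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
V0, V3 = hc_cov(X, e, k=0), hc_cov(X, e, k=3)
```

Because 1/(1 - h_ii)^3 ≥ 1, HC3 standard errors are never smaller than HC0's, which is the sense in which it guards against leverage points.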

262 citations


Journal ArticleDOI
TL;DR: In this article, the asymptotic properties of the local Whittle estimator in the nonstationary case (d > 1/2) were investigated, and the estimator was shown to be inconsistent and to converge in probability to unity when d > 1 and the process has a polynomial trend of order α > 1/2.
Abstract: Asymptotic properties of the local Whittle estimator in the nonstationary case (d > 1/2) are explored. For 1/2 < d ≤ 1, the estimator is shown to be consistent. For d > 1 and when the process has a polynomial trend of order α > 1/2, the estimator is shown to be inconsistent and to converge in probability to unity.
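A minimal sketch of the local Whittle objective being studied, minimized over the first m Fourier frequencies; the simulated series, seed, and bandwidth m are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle(x, m):
    """Local Whittle estimate of the memory parameter d.

    Minimizes R(d) = log( mean_j lam_j^{2d} I_j ) - 2d * mean_j log lam_j
    over the first m Fourier frequencies lam_j = 2*pi*j/n."""
    n = len(x)
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n
    w = np.fft.fft(x)[1:m + 1]
    I = np.abs(w)**2 / (2 * np.pi * n)    # periodogram ordinates
    def R(d):
        return np.log(np.mean(lam**(2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.5, 2.0), method='bounded').x

rng = np.random.default_rng(1)
e = rng.normal(size=4096)
d_noise = local_whittle(e, m=200)           # white noise: d near 0
d_rw = local_whittle(np.cumsum(e), m=200)   # random walk (d = 1): d near 1
```

The random-walk case illustrates the unit-root boundary of the nonstationary region the abstract discusses.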

253 citations


Journal ArticleDOI
TL;DR: In this article, the root n consistent estimator for nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available, is presented.
Abstract: This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the “true” value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved “true” variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the “true,” unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.

249 citations


Journal ArticleDOI
TL;DR: In this article, a class of estimators for the parameters of a generalized autoregressive conditional heteroscedastic (GARCH) sequence was proposed, which are consistent and asymptotically normal under mild conditions.
Abstract: We propose a class of estimators for the parameters of a GARCH(p, q) sequence. We show that our estimators are consistent and asymptotically normal under mild conditions. The quasi-maximum likelihood and the likelihood estimators are discussed in detail. We show that the maximum likelihood estimator is optimal. If the tail of the distribution of the innovations is polynomial, even a quasi-maximum likelihood estimator based on an exponential density performs better than the standard normal density-based quasi-likelihood estimator of Lee and Hansen and of Lumsdaine.
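A sketch of the Gaussian quasi-maximum likelihood estimator for the GARCH(1,1) special case; the variance initialization, starting values, and simulated parameters are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_qmle(y):
    """Gaussian QMLE for GARCH(1,1):
    sigma2_k = omega + alpha * y_{k-1}^2 + beta * sigma2_{k-1}."""
    def negloglik(theta):
        omega, alpha, beta = theta
        s2 = np.empty_like(y)
        s2[0] = np.var(y)                  # crude initialization
        for k in range(1, len(y)):
            s2[k] = omega + alpha * y[k-1]**2 + beta * s2[k-1]
        return 0.5 * np.sum(np.log(s2) + y**2 / s2)
    res = minimize(negloglik, x0=[0.1, 0.1, 0.5],
                   bounds=[(1e-6, None), (0.0, 1.0), (0.0, 0.999)])
    return res.x

# simulate a GARCH(1,1) path with omega=0.1, alpha=0.1, beta=0.8
rng = np.random.default_rng(2)
n, omega, alpha, beta = 4000, 0.1, 0.1, 0.8
y = np.empty(n)
s2 = omega / (1 - alpha - beta)            # start at stationary variance
for k in range(n):
    y[k] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * y[k]**2 + beta * s2
est = garch11_qmle(y)                      # (omega, alpha, beta) estimates
```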

184 citations


Journal ArticleDOI
TL;DR: In this article, a simple and consistent estimation procedure for conditional moment restrictions is proposed, which is directly based on the definition of the conditional moments and does not require the selection of any user-chosen number.
Abstract: In econometrics, models stated as conditional moment restrictions are typically estimated by means of the generalized method of moments (GMM). The GMM estimation procedure can render inconsistent estimates since the number of arbitrarily chosen instruments is finite. In fact, consistency of the GMM estimators relies on additional assumptions that imply unclear restrictions on the data generating process. This article introduces a new, simple and consistent estimation procedure for these models that is directly based on the definition of the conditional moments. The main feature of our procedure is its simplicity, since its implementation does not require the selection of any user-chosen number, and statistical inference is straightforward since the proposed estimator is asymptotically normal. In addition, we suggest an asymptotically efficient estimator constructed by carrying out one Newton–Raphson step in the direction of the efficient GMM estimator.

166 citations


Journal ArticleDOI
TL;DR: In this paper, the authors established consistency and asymptotic normality of the quasi-maximum likelihood estimator in the linear ARCH model and allowed the parameters to be in the region where no stationary version of the process exists.
Abstract: We establish consistency and asymptotic normality of the quasi-maximum likelihood estimator in the linear ARCH model. Contrary to the existing literature, we allow the parameters to be in the region where no stationary version of the process exists. This implies that the estimator is always asymptotically normal.

149 citations


Book ChapterDOI
30 Dec 2004
TL;DR: In this paper, a series-type instrumental variable (IV) estimator of the parameters of a spatial first-order autoregressive model with first-order autoregressive disturbances is proposed.
Abstract: The purpose of this paper is two-fold. First, on a theoretical level we introduce a series-type instrumental variable (IV) estimator of the parameters of a spatial first-order autoregressive model with first-order autoregressive disturbances. We demonstrate that our estimator is asymptotically efficient within the class of IV estimators and has a lower computational count than an efficient IV estimator that was introduced by Lee (2003). Second, via Monte Carlo techniques we give small-sample results relating to our suggested estimator, the maximum likelihood (ML) estimator, and other IV estimators suggested in the literature. Among other things, we find that the ML estimator, both asymptotically efficient IV estimators, and an IV estimator introduced in Kelejian and Prucha (1998) have quite similar small-sample properties. Our results also suggest the use of iterated versions of the IV estimators.

148 citations


Journal ArticleDOI
TL;DR: A class of weighted estimators with general time-varying weights that are related to a class of estimators proposed by Robins, Rotnitzky, and Zhao are developed and shown to be consistent and asymptotically normal under appropriate conditions.
Abstract: The case-cohort design is a common means of reducing the cost of covariate measurements in large failure-time studies. Under this design, complete covariate data are collected only on the cases (i. e., the subjects whose failure times are uncensored) and on a subcohort randomly selected from the whole cohort. In many applications, certain covariates are readily measured on all cohort members, and surrogate measurements of the expensive covariates also may be available. The existing relative-risk estimators for the case-cohort design disregard the covariate data collected outside the case-cohort sample and thus incur loss of efficiency. To make better use of the available data, we develop a class of weighted estimators with general time-varying weights that are related to a class of estimators proposed by Robins, Rotnitzky, and Zhao. The estimators are shown to be consistent and asymptotically normal under appropriate conditions. We identify the estimator within this class that maximizes efficiency, numeri...

146 citations


Journal ArticleDOI
TL;DR: A new a posteriori error estimator for lowest order conforming finite elements is presented and analyzed based on Raviart--Thomas finite elements and can be obtained locally by a postprocessing technique involving for each vertex a local subproblem associated with a dual mesh.
Abstract: We present and analyze a new a posteriori error estimator for lowest order conforming finite elements. It is based on Raviart--Thomas finite elements and can be obtained locally by a postprocessing technique involving for each vertex a local subproblem associated with a dual mesh. Under certain regularity assumptions on the right-hand side, we obtain an error estimator where the constant in the upper bound for the true error tends to one. Replacing the conforming finite element solution by a postprocessed one, the error estimator is asymptotically exact. The local equivalence between our estimator and the standard residual-based error estimator is established. Numerical results illustrate the performance of the error estimator.

140 citations


Journal ArticleDOI
01 Jun 2004-Genetics
TL;DR: Simulations of a Wright-Fisher population with known Ne show that theSummStat estimator is useful across a realistic range of individuals and loci sampled, generations between samples, and Ne values, and under the conditions simulated, SummStat confidence intervals were more conservative than the likelihood-based estimators and more likely to include true Ne.
Abstract: We describe and evaluate a new estimator of the effective population size (Ne), a critical parameter in evolutionary and conservation biology. This new "SummStat" Ne estimator is based upon the use of summary statistics in an approximate Bayesian computation framework to infer Ne. Simulations of a Wright-Fisher population with known Ne show that the SummStat estimator is useful across a realistic range of individuals and loci sampled, generations between samples, and Ne values. We also address the paucity of information about the relative performance of Ne estimators by comparing the SummStat estimator to two recently developed likelihood-based estimators and a traditional moment-based estimator. The SummStat estimator is the least biased of the four estimators compared. In 32 of 36 parameter combinations investigated using initial allele frequencies drawn from a Dirichlet distribution, it has the lowest bias. The relative mean square error (RMSE) of the SummStat estimator was generally intermediate to the others. All of the estimators had RMSE > 1 when small samples (n = 20, five loci) were collected a generation apart. In contrast, when samples were separated by three or more generations and Ne ≤ 50, the SummStat and likelihood-based estimators all had greatly reduced RMSE. Under the conditions simulated, SummStat confidence intervals were more conservative than the likelihood-based estimators and more likely to include true Ne. The greatest strength of the SummStat estimator is its flexible structure. This flexibility allows it to incorporate any potentially informative summary statistic from population genetic data.
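The SummStat estimator rests on rejection-sampling approximate Bayesian computation. A generic sketch of that framework — estimating a scale parameter from a hypothetical sample rather than Ne from genetic data — not the authors' implementation:

```python
import numpy as np

def abc_rejection(obs_summary, simulate, prior_draw, n_draws=20000, keep=0.01):
    """Rejection-sampling ABC: draw parameters from the prior, simulate
    data, and keep the draws whose summary statistic lands closest to the
    observed one (smallest `keep` fraction of distances)."""
    thetas = np.array([prior_draw() for _ in range(n_draws)])
    dists = np.array([abs(simulate(t) - obs_summary) for t in thetas])
    cutoff = np.quantile(dists, keep)
    return thetas[dists <= cutoff]

rng = np.random.default_rng(3)
data = rng.normal(0.0, 2.0, size=200)               # "observed" sample
obs = data.std()                                    # summary statistic
sim = lambda s: rng.normal(0.0, s, size=200).std()  # simulator under s
prior = lambda: rng.uniform(0.1, 10.0)              # flat prior on the scale
posterior = abc_rejection(obs, sim, prior)
estimate = posterior.mean()                         # posterior-mean estimate
```

The flexibility the abstract highlights corresponds to the free choice of summary statistic and distance here.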

129 citations


Journal ArticleDOI
TL;DR: In this paper, the asymptotic properties of double-stage quantile regression estimators with random regressors are investigated. The first stage is based on quantile regressions with the same quantile as in the second stage, which ensures robustness of the estimation procedure.
Abstract: We present the asymptotic properties of double-stage quantile regression estimators with random regressors, where the first stage is based on quantile regressions with the same quantile as in the second stage, which ensures robustness of the estimation procedure. We derive invariance properties with respect to the reformulation of the dependent variable. We propose a consistent estimator of the variance-covariance matrix of the new estimator. Finally, we investigate finite sample properties of this estimator by using Monte Carlo simulations.

Journal ArticleDOI
TL;DR: This work develops a new linear estimator designed to minimize the worst-case regret over all bounded data vectors and demonstrates through several examples that the minimax regret estimator can significantly increase the performance over the conventional least-squares estimator.
Abstract: We develop a new linear estimator for estimating an unknown parameter vector x in a linear model in the presence of bounded data uncertainties. The estimator is designed to minimize the worst-case regret over all bounded data vectors, namely, the worst-case difference between the mean-squared error (MSE) attainable using a linear estimator that does not know the true parameters x and the optimal MSE attained using a linear estimator that knows x. We demonstrate through several examples that the minimax regret estimator can significantly increase the performance over the conventional least-squares estimator, as well as several other least-squares alternatives.

Journal ArticleDOI
TL;DR: In this paper, a generalized method of moments (GMM) procedure is used to adjust the naive estimator to be consistent and asymptotically normal, and the objective function of this procedure is shown to be interpretable as an indirect likelihood.
Abstract: This article presents an exposition and synthesis of the theory and some applications of the so-called indirect method of inference. These ideas have been exploited in the field of econometrics, but less so in other fields such as biostatistics and epidemiology. In the indirect method, statistical inference is based on an intermediate statistic, which typically follows an asymptotic normal distribution, but is not necessarily a consistent estimator of the parameter of interest. This intermediate statistic can be a naive estimator based on a convenient but misspecified model, a sample moment or a solution to an estimating equation. We review a procedure of indirect inference based on the generalized method of moments, which involves adjusting the naive estimator to be consistent and asymptotically normal. The objective function of this procedure is shown to be interpretable as an “indirect likelihood” based on the intermediate statistic. Many properties of the ordinary likelihood function can be extended to this indirect likelihood. This method is often more convenient computationally than maximum likelihood estimation when handling such model complexities as random effects and measurement error, for example, and it can also serve as a basis for robust inference and model selection, with less stringent assumptions on the data generating mechanism. Many familiar estimation techniques can be viewed as examples of this approach. We describe applications to measurement error, omitted covariates and recurrent events. A dataset concerning prevention of mammary tumors in rats is analyzed using a Poisson regression model with overdispersion. A second dataset from an epidemiological study is analyzed using a logistic regression model with mismeasured covariates. A third dataset of exam scores is used to illustrate robust covariance selection in graphical models.

Journal ArticleDOI
TL;DR: In this article, an L2-consistent subsampling estimator for the asymptotic covariance matrix of the sample variogram is derived and used to construct a test statistic.
Abstract: A common requirement for spatial modeling is the development of an appropriate correlation structure. Although the assumption of isotropy is often made for this structure, it is not always appropriate. A conventional practice when checking for isotropy is to informally assess plots of direction-specific sample (semi)variograms. Although a useful diagnostic, these graphical techniques are difficult to assess and open to interpretation. Formal alternatives to graphical diagnostics are valuable, but have been applied to a limited class of models. In this article we propose a formal approach to test for isotropy that is both objective and valid for a wide class of models. This approach, which is based on the asymptotic joint normality of the sample variogram, can be used to compare sample variograms in multiple directions. An L2-consistent subsampling estimator for the asymptotic covariance matrix of the sample variogram is derived and used to construct a test statistic. A subsampling approach and a limiting ...

Journal ArticleDOI
TL;DR: In this article, the bias properties of common estimators used in growth regressions derived from the Solow model were evaluated using Monte Carlo simulations, and the results suggest that using an OLS estimator applied to a single cross-section of variables averaged over time (the between estimator) performs best in terms of the extent of bias on each of the estimated coefficients.
Abstract: Using Monte Carlo simulations, this paper evaluates the bias properties of common estimators used in growth regressions derived from the Solow model. We explicitly allow for measurement error in the right-hand side variables, as well as country-specific effects that are correlated with the regressors. Our results suggest that using an OLS estimator applied to a single cross-section of variables averaged over time (the between estimator) performs best in terms of the extent of bias on each of the estimated coefficients. The fixed-effects estimator and the Arellano-Bond estimator greatly overstate the speed of convergence under a wide variety of assumptions concerning the type and extent of measurement error, while between understates it somewhat. Finally, fixed effects and Arellano-Bond bias the slope estimates on the human and physical capital accumulation variables toward zero.
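The attenuation logic behind the between estimator's advantage under measurement error can be sketched directly: averaging a noisy regressor over T periods shrinks the measurement-error variance by a factor of T before the regression is run. The panel dimensions and noise variances below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, beta = 500, 5, 1.0
# true regressor: a persistent country component plus an idiosyncratic part
xi = rng.normal(size=(N, T)) + rng.normal(size=(N, 1))
x = xi + rng.normal(size=(N, T))           # observed with measurement error
y = beta * xi + 0.5 * rng.normal(size=(N, T))

# pooled OLS on the noisy panel: attenuated toward zero
b_pooled = (x.ravel() @ y.ravel()) / (x.ravel() @ x.ravel())

# between estimator: average over time first, which cuts the
# measurement-error variance by a factor of T, reducing attenuation
xb, yb = x.mean(axis=1), y.mean(axis=1)
b_between = (xb @ yb) / (xb @ xb)
```

Both estimators are biased below the true beta of 1.0 here, but the between estimator is closer, mirroring the paper's finding.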

Journal ArticleDOI
TL;DR: A uniform Cramer-Rao lower bound (UCRLB) on the total variance of any estimator of an unknown vector of parameters, with bias gradient matrix whose norm is bounded by a constant is developed.
Abstract: We develop a uniform Cramer-Rao lower bound (UCRLB) on the total variance of any estimator of an unknown vector of parameters, with bias gradient matrix whose norm is bounded by a constant. We consider both the Frobenius norm and the spectral norm of the bias gradient matrix, leading to two corresponding lower bounds. We then develop optimal estimators that achieve these lower bounds. In the case in which the measurements are related to the unknown parameters through a linear Gaussian model, Tikhonov regularization is shown to achieve the UCRLB when the Frobenius norm is considered, and the shrunken estimator is shown to achieve the UCRLB when the spectral norm is considered. For more general models, the penalized maximum likelihood (PML) estimator with a suitable penalizing function is shown to asymptotically achieve the UCRLB. To establish the asymptotic optimality of the PML estimator, we first develop the asymptotic mean and variance of the PML estimator for any choice of penalizing function satisfying certain regularity constraints and then derive a general condition on the penalizing function under which the resulting PML estimator asymptotically achieves the UCRLB. This then implies that from all linear and nonlinear estimators with bias gradient whose norm is bounded by a constant, the proposed PML estimator asymptotically results in the smallest possible variance.
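In the linear Gaussian case mentioned above, Tikhonov regularization is ordinary ridge estimation. A minimal sketch of that estimator (the design, noise level, and regularization weight are hypothetical; no claim is made here about attaining the UCRLB):

```python
import numpy as np

def tikhonov(H, y, lam):
    """Tikhonov-regularized (ridge) estimate for y = H x + noise:
    argmin ||y - H x||^2 + lam ||x||^2."""
    p = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(p), H.T @ y)

rng = np.random.default_rng(8)
H = rng.normal(size=(50, 5))
x_true = rng.normal(size=5)
y = H @ x_true + 0.1 * rng.normal(size=50)

x0 = tikhonov(H, y, 0.0)   # lam = 0 recovers ordinary least squares
x1 = tikhonov(H, y, 5.0)   # positive lam shrinks the estimate toward zero
```

The shrinkage introduces bias in exchange for variance, which is exactly the bias-gradient trade-off the bound formalizes.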

Journal ArticleDOI
01 Sep 2004-Extremes
TL;DR: In this paper, the minimum density power divergence estimator (MDPDE) for the shape and scale parameters of the generalized Pareto distribution (GPD) is implemented.
Abstract: In this article we implement the minimum density power divergence estimator (MDPDE) for the shape and scale parameters of the generalized Pareto distribution (GPD). The MDPDE is indexed by a constant α ≥ 0 that controls the trade-off between robustness and efficiency. As α increases, robustness increases and efficiency decreases. For α = 0 the MDPDE is equivalent to the maximum likelihood estimator (MLE). We show that for α > 0 the MDPDE for the GPD has a bounded influence function. For α < 0.2 the MDPDE maintains good asymptotic relative efficiencies, usually above 90%. The results from a Monte Carlo study agree with these asymptotic calculations. The MDPDE is asymptotically normally distributed if the shape parameter is less than (1 + α)/(2 + α), and estimators for standard errors are readily computed under this restriction. We compare the MDPDE, MLE, Dupuis' optimally-biased robust estimator (OBRE), and Peng and Welsh's Medians estimator for the parameters. The simulations indicate that the MLE has the highest efficiency under uncontaminated GPDs. However, for the GPD contaminated with gross errors, OBRE and MDPDE are more efficient than the MLE. For all the simulated models that we studied, the Medians estimator had poor performance.
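A sketch of the density power divergence objective for the GPD, with the integral term computed numerically; the starting values, sample size, and optimizer are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import stats, integrate, optimize

def mdpde_gpd(x, alpha=0.1):
    """Minimum density power divergence estimate of GPD (shape, scale).

    Minimizes  int f^(1+alpha) - (1 + 1/alpha) * mean f(x_i)^alpha.
    alpha -> 0 recovers the MLE; larger alpha trades efficiency
    for robustness to outliers."""
    def objective(theta):
        xi, sigma = theta
        if sigma <= 0:
            return np.inf
        f = stats.genpareto.pdf(x, xi, scale=sigma)
        if np.any(f <= 0):               # a data point outside the support
            return np.inf
        integral, _ = integrate.quad(
            lambda t: stats.genpareto.pdf(t, xi, scale=sigma)**(1 + alpha),
            0, np.inf)
        return integral - (1 + 1 / alpha) * np.mean(f**alpha)
    res = optimize.minimize(objective, x0=[0.1, 1.0], method='Nelder-Mead')
    return res.x

rng = np.random.default_rng(4)
x = stats.genpareto.rvs(0.2, scale=1.0, size=500, random_state=rng)
xi_hat, sigma_hat = mdpde_gpd(x, alpha=0.1)
```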

Journal ArticleDOI
TL;DR: In this article, the authors provide an overview of available recurrent events analysis methods and present an inverse probability of censoring weighted estimator for the regression parameters in the Andersen-Gill model that is commonly used for recurrent event analysis.
Abstract: Summary. Recurrent events models have had considerable attention recently. The majority of approaches show the consistency of parameter estimates under the assumption that censoring is independent of the recurrent events process of interest conditional on the covariates that are included in the model. We provide an overview of available recurrent events analysis methods and present an inverse probability of censoring weighted estimator for the regression parameters in the Andersen–Gill model that is commonly used for recurrent event analysis. This estimator remains consistent under informative censoring if the censoring mechanism is estimated consistently, and it generally improves on the naive estimator for the Andersen–Gill model in the case of independent censoring. We illustrate the bias of ad hoc estimators in the presence of informative censoring with a simulation study and provide a data analysis of recurrent lung exacerbations in cystic fibrosis patients when some patients are lost to follow-up.

Journal ArticleDOI
TL;DR: A consistent estimator is proposed, based on a proper correction of the ordinary least squares estimator, which is explicitly given in terms of the true value of the noise variance.
Abstract: A parameter estimation problem for ellipsoid fitting in the presence of measurement errors is considered. The ordinary least squares estimator is inconsistent, and due to the nonlinearity of the model, the orthogonal regression estimator is inconsistent as well, i.e., these estimators do not converge to the true value of the parameters, as the sample size tends to infinity. A consistent estimator is proposed, based on a proper correction of the ordinary least squares estimator. The correction is explicitly given in terms of the true value of the noise variance.
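The same correction idea in its simplest linear errors-in-variables form — not the paper's ellipsoid-specific estimator: when the noise variance is known, the OLS normal equations can be debiased directly, removing the attenuation that makes plain OLS inconsistent. The simulated parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta_true, s2_noise = 5000, 1.5, 0.5
xi = rng.normal(size=n)                              # latent "true" regressor
x = xi + rng.normal(0, np.sqrt(s2_noise), size=n)    # measured with error
y = beta_true * xi + rng.normal(0, 0.2, size=n)

# plain OLS: inconsistent, attenuated toward zero by the measurement error
b_ols = (x @ y) / (x @ x)
# corrected estimator: subtract the known noise contribution from x'x
b_corr = (x @ y) / (x @ x - n * s2_noise)
```

The correction term n * s2_noise is exactly the expected inflation of x'x caused by the measurement noise.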

Journal ArticleDOI
TL;DR: In this article, the covariate distribution is decomposed into the product of a series of conditional distributions according to the overall missing-data patterns, and the conditional distributions are then represented in the general odds ratio form.
Abstract: Robustness of covariate modeling for the missing-covariate problem in parametric regression is studied under the missing-at-random assumption. For a simple missing-covariate pattern, a nonparametric covariate model is proposed and is shown to yield a consistent and semiparametrically efficient estimator for the regression parameter. Total robustness is achieved in this situation. For more general missing-covariate patterns, a novel semiparametric modeling approach is proposed for the covariates. In this approach, the covariate distribution is first decomposed into the product of a series of conditional distributions according to the overall missing-data patterns, and the conditional distributions are then represented in the general odds ratio form. The general odds ratios are modeled parametrically, and the other components of the covariate distribution are modeled nonparametrically. Maximum semiparametric likelihood is used to find the parameter estimates. The proposed method yields a consistent estimator f...

01 Jan 2004
TL;DR: In this article, the authors proposed a ratio estimator under double sampling in the presence of non-response when the population mean of the auxiliary variable is unknown, and obtained the first-phase sample size, second-phase sample size, and sub-sampling fraction for the proposed estimator which minimize the survey cost for a specified precision.
Abstract: SUMMARY In this paper we have considered a ratio estimator under double sampling in the presence of non-response when the population mean of the auxiliary variable is unknown, and obtained the first-phase sample size, second-phase sample size and sub-sampling fraction for the proposed estimator which minimize the survey cost for a specified precision. The cost obtained for the proposed estimator is compared theoretically and numerically with the cost obtained by the Hansen and Hurwitz estimator; the survey cost for our proposed estimator is found to be less than the cost obtained by the Hansen and Hurwitz estimator.
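A sketch of the plain double-sampling ratio estimator that the proposal extends (no non-response adjustment; the population, sample sizes, and proportionality are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
# population where y is roughly proportional to a cheap auxiliary x
N = 10000
x_pop = rng.uniform(10, 50, size=N)
y_pop = 2.0 * x_pop + rng.normal(0, 5, size=N)

# phase 1: large sample measuring only the auxiliary variable x
n1 = 1000
idx1 = rng.choice(N, n1, replace=False)
xbar1 = x_pop[idx1].mean()

# phase 2: subsample of phase 1, measuring the expensive y as well
n2 = 100
idx2 = rng.choice(idx1, n2, replace=False)
ybar2 = y_pop[idx2].mean()
xbar2 = x_pop[idx2].mean()

# double-sampling ratio estimator of the population mean of y
y_ratio = ybar2 * (xbar1 / xbar2)
```

The cheap first-phase mean xbar1 stands in for the unknown population mean of the auxiliary variable, which is exactly the situation the paper considers.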

Journal ArticleDOI
TL;DR: In this paper, a new class of generally applicable wavelet-based tests is proposed for serial correlation of unknown form in the estimated residuals of a panel regression model, where error components can be one-way or two-way, individual and time effects can be fixed or random, and regressors may contain lagged dependent variables or deterministic/stochastic trending variables.
Abstract: Wavelet analysis is a new mathematical method developed as a unified field of science over the last decade or so. As a spatially adaptive analytic tool, wavelets are useful for capturing serial correlation where the spectrum has peaks or kinks, as can arise from persistent dependence, seasonality, and other kinds of periodicity. This paper proposes a new class of generally applicable wavelet-based tests for serial correlation of unknown form in the estimated residuals of a panel regression model, where error components can be one-way or two-way, individual and time effects can be fixed or random, and regressors may contain lagged dependent variables or deterministic/stochastic trending variables. Our tests are applicable to unbalanced heterogenous panel data. They have a convenient null limit N(0,1) distribution. No formulation of an alternative model is required, and our tests are consistent against serial correlation of unknown form even in the presence of substantial inhomogeneity in serial correlation across individuals. This is in contrast to existing serial correlation tests for panel models, which ignore inhomogeneity in serial correlation across individuals by assuming a common alternative, and thus have no power against the alternatives where the average of serial correlations among individuals is close to zero. We propose and justify a data-driven method to choose the smoothing parameter, namely the finest scale in wavelet spectral estimation, making the tests completely operational in practice. The data-driven finest scale automatically converges to zero under the null hypothesis of no serial correlation and diverges to infinity as the sample size increases under the alternative, ensuring the consistency of our tests. Simulation shows that our tests perform well in small and finite samples relative to some existing tests.

Journal ArticleDOI
TL;DR: In this paper, a new estimator of the Weibull tail-coefficient is presented, which is based on the log-spacings of the upper order statistics.
Abstract: We present a new estimator of the Weibull tail-coefficient. The Weibull tail-coefficient is defined as the regular variation coefficient of the inverse cumulative hazard function. Our estimator is based on the log-spacings of the upper order statistics. Therefore, it is very similar to the Hill estimator for the extreme value index. We prove the weak consistency and the asymptotic normality of our estimator. Its asymptotic as well as its finite sample performances are compared to classical ones.
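The proposed estimator is built from log-spacings of top order statistics, like the classical Hill estimator it parallels. A minimal sketch of the Hill estimator itself (the exact-Pareto sample and choice of k are illustrative assumptions):

```python
import numpy as np

def hill(x, k):
    """Hill estimator of a positive extreme value index: the mean
    log-spacing of the k largest observations above X_(n-k)."""
    xs = np.sort(x)
    top = xs[-k:]                    # k largest order statistics
    return np.mean(np.log(top)) - np.log(xs[-k - 1])

rng = np.random.default_rng(9)
u = rng.uniform(size=10000)
x = u ** (-0.5)                      # Pareto tail, extreme value index 0.5
gamma_hat = hill(x, k=500)
```

The Weibull tail-coefficient estimator replaces these Pareto-type log-spacings with spacings normalized for Weibull-type tails, but the construction is the same in spirit.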

Journal Article
TL;DR: In this paper, a new chain-type ratio-to-difference estimator is proposed using information on an auxiliary character in successive (rotation) sampling over two occasions; the proposed estimator is compared with the sample mean estimator when there is no matching and with the optimum estimator, which is a combination of the means of the matched and unmatched portions of the sample at the second occasion.
Abstract: Use of auxiliary information for improving the precision of estimates is a well-known technique in sample surveys. In successive (rotation) sampling it is advantageous to use the information collected on previous occasions for improving the precision of estimates at the current occasion. The previous information may be in the form of an auxiliary character, the character under study itself, or both. In the present work, a new chain-type ratio-to-difference estimator has been proposed using the information on an auxiliary character in successive (rotation) sampling over two occasions. The proposed estimator has been compared with the sample mean estimator when there is no matching and with the optimum estimator, which is a combination of the means of the matched and unmatched portions of the sample at the second occasion. Optimum replacement policy is also discussed. Results have been justified empirically.

Journal ArticleDOI
TL;DR: In this article, a new derivation for the well-known Good-Toulmin predictor as a moment-based estimator for the asymptotic limit of the discovery probability is presented.
Abstract: Consider a population comprising disjoint classes. An important problem arising from various fields is prediction of the random conditional probability of discovering a new class. The asymptotic normality of the discovery probability is established in a Poisson model, where the number of individuals from each class is a Poisson process with a class-specific rate. A new derivation is presented for the well-known Good–Toulmin predictor as a moment-based estimator for the asymptotic limit of the discovery probability. The Good–Toulmin predictor is also shown to be a nonparametric empirical Bayes estimator for the expectation of the discovery probability given the rates of the Poisson processes and an approximation to an unbiased estimator only for the identifiable part of the expectation of the discovery probability in a multinomial model. The properties of the moment-based estimator are investigated so that confidence and prediction intervals can be constructed. The Good–Toulmin predictor and the discovery ...

Journal ArticleDOI
TL;DR: This paper presents a new mobile station velocity estimator, based on the first moment of the instantaneous frequency of the received signal, that is robust to shadowing, and shows that the performance of the IF-based estimator is superior to that of existing velocity estimators.
Abstract: This paper presents a new mobile station velocity estimator based on the first moment of the instantaneous frequency (IF) of the received signal. The effects of shadowing, additive noise, and scattering distribution on the proposed velocity estimator are analyzed. We show that, unlike velocity estimators based on the envelope and quadrature components of the received signal, the proposed estimator is robust to shadowing. We also prove that the performance of the IF-based estimator is only mildly affected by the presence of additive noise. Finally, by using simulations we show that the performance of the proposed IF-based estimator is superior to that of existing velocity estimators.

Journal ArticleDOI
TL;DR: In this article, the authors consider a class of consistent semi-parametric estimators of a positive tail index γ, parameterised in a tuning or control parameter α, which enables us to have access, for any available sample, to an estimator of the tail index with a null dominant component of asymptotic bias, and consequently with a reasonably flat mean squared error pattern.
Abstract: In this paper, we first consider a class of consistent semi-parametric estimators of a positive tail index γ, parameterised in a tuning or control parameter α. Such a control parameter enables us to have access, for any available sample, to an estimator of the tail index γ with a null dominant component of asymptotic bias, and consequently with a reasonably flat mean squared error pattern, as a function of k, the number of top-order statistics considered. Such a control parameter depends on a second-order parameter ρ, which will be adequately estimated so that we may achieve a high efficiency relative to the classical Hill estimator, provided we use a number of top-order statistics larger than the one usually required for the estimation through the Hill estimator. An illustration of the behaviour of the estimators is provided, through the analysis of the daily log-returns on the Euro–US$ exchange rates.
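For reference, the classical Hill estimator against which the paper measures efficiency is the average of log-spacings of the k top order statistics: gamma_hat(k) = (1/k) * sum_{i=1..k} log(X_(n-i+1) / X_(n-k)). A minimal sketch (interface illustrative):

```python
import math

def hill_estimator(data, k):
    """Classical Hill estimator of a positive tail index gamma, based on
    the k largest order statistics of the sample (0 < k < n)."""
    x = sorted(data, reverse=True)
    if not 0 < k < len(x):
        raise ValueError("require 0 < k < n")
    # average log-spacing between the top-k observations and X_(n-k)
    return sum(math.log(x[i] / x[k]) for i in range(k)) / k
```

The abstract's point is precisely that gamma_hat(k) is very sensitive to the choice of k, whereas the bias-controlled class it studies has a flatter mean squared error in k.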

Journal ArticleDOI
Zhong Guan1
TL;DR: In this paper, a semiparametric changepoint model is considered and the empirical likelihood method is applied to detect the change from a distribution to a weighted distribution in a sequence of independent random variables.
Abstract: A semiparametric changepoint model is considered and the empirical likelihood method is applied to detect the change from a distribution to a weighted distribution in a sequence of independent random variables. The maximum likelihood changepoint estimator is shown to be consistent. The empirical likelihood ratio test statistic is proved to have the same limit null distribution as that with parametric models. A data-based test for the validity of the models is also proposed. Simulation shows the sensitivity and robustness of the semiparametric approach. The methods are applied to some classical datasets such as the Nile River data and stock price data.
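The changepoint-estimation idea can be illustrated with a much simpler working model than the paper's semiparametric weighted-distribution setup: scan every split point and keep the one maximizing a fit criterion. A sketch under a normal mean-shift working model (the model, names, and criterion are mine for illustration, not the paper's empirical likelihood method):

```python
def changepoint_mle_mean_shift(x):
    """Illustrative maximum-likelihood changepoint estimate for a shift
    in mean under an i.i.d. normal working model: choose the split that
    maximizes the reduction in residual sum of squares."""
    def rss(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    total = rss(x)
    best_k, best_gain = None, -1.0
    for k in range(1, len(x)):
        gain = total - rss(x[:k]) - rss(x[k:])
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k  # change occurs between x[best_k - 1] and x[best_k]
```

The paper's contribution is to make this kind of scan valid without a parametric model for the pre-change distribution, and to show the resulting changepoint estimator is consistent.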

Journal ArticleDOI
TL;DR: In this paper, a generalized method of moments (GMM) approach to the estimation of autoregressive roots near unity with panel data and incidental deterministic trends is proposed. But the GMM estimator has convergence rate n 1/6, slower than √ n, when the true localizing parameter is zero (i.e., when there is a panel unit root) and the deterministic trend in the panel are linear.
Abstract: This paper investigates a generalized method of moments (GMM) approach to the estimation of autoregressive roots near unity with panel data and incidental deterministic trends. Such models arise in empirical econometric studies of firm size and in dynamic panel data modeling with weak instruments. The two moment conditions in the GMM approach are obtained by constructing bias corrections to the score functions under OLS and GLS detrending, respectively. It is shown that the moment condition under GLS detrending corresponds to taking the projected score on the Bhattacharya basis, linking the approach to recent work on projected score methods for models with infinite numbers of nuisance parameters (Waterman and Lindsay, 1998). Assuming that the localizing parameter takes a nonpositive value, we establish consistency of the GMM estimator and find its limiting distribution. A notable new finding is that the GMM estimator has convergence rate n^{1/6}, slower than √n, when the true localizing parameter is zero (i.e., when there is a panel unit root) and the deterministic trends in the panel are linear. These results, which rely on boundary point asymptotics, point to the continued difficulty of distinguishing unit roots from local alternatives, even when there is an infinity of additional data. JEL Classification: C22 & C23

Journal ArticleDOI
TL;DR: In this paper, nonparametric kernel estimators of the semivariogram are proposed under the assumption of isotropy; the estimators are shown to be consistent, and the selection of the bandwidth parameter is treated via the MSE and MISE criteria.
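The kind of estimator described can be sketched as a kernel-weighted average of squared increments over pairwise distances, with the bandwidth controlling the bias-variance trade-off the MSE/MISE criteria arbitrate. A hedged, illustrative version with a Gaussian kernel (the paper's exact estimator and its bandwidth-selection rules are not reproduced):

```python
import math

def kernel_semivariogram(coords, z, h, bandwidth):
    """Kernel-smoothed isotropic semivariogram at lag h: a weighted
    average of half squared increments 0.5*(z_i - z_j)^2, with Gaussian
    weights on |dist(i, j) - h| (illustrative sketch)."""
    num = den = 0.0
    for i in range(len(z)):
        for j in range(i + 1, len(z)):
            d = math.dist(coords[i], coords[j])
            w = math.exp(-0.5 * ((d - h) / bandwidth) ** 2)
            num += w * 0.5 * (z[i] - z[j]) ** 2
            den += w
    return num / den if den else float("nan")
```

As the bandwidth shrinks, the estimate approaches the classical Matheron moment estimator computed over pairs at distance exactly h; larger bandwidths trade that fidelity for lower variance.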