Topic

Studentization

About: Studentization, i.e., dividing a statistic by a sample-based estimate of its standard deviation, is a research topic. Over the lifetime, 126 publications have been published within this topic, receiving 18,750 citations.
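
To make the topic concrete: studentizing a statistic means dividing it by a sample-based estimate of its own standard deviation, the classic case being the one-sample t-statistic. Below is a minimal Python sketch; the function name is ours, purely illustrative.

```python
import numpy as np
from scipy import stats

def studentized_mean(x, mu0=0.0):
    """One-sample t-statistic: the sample mean studentized by its
    estimated standard error s / sqrt(n)."""
    x = np.asarray(x, dtype=float)
    se = x.std(ddof=1) / np.sqrt(len(x))  # sample-based scale estimate
    return (x.mean() - mu0) / se

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=30)
t = studentized_mean(x)
# Because the scale is estimated rather than known, t is compared with a
# Student t distribution on n - 1 degrees of freedom (for Gaussian data).
print(t, 2 * stats.t.sf(abs(t), df=len(x) - 1))
```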


Papers
Journal ArticleDOI
TL;DR: In this article, the authors discuss the problem of estimating the sampling distribution of a pre-specified random variable R(X, F) on the basis of the observed data x.
Abstract: We discuss the following problem: given a random sample \(X = (X_1, X_2, \ldots, X_n)\) from an unknown probability distribution \(F\), estimate the sampling distribution of some prespecified random variable \(R(X, F)\) on the basis of the observed data \(x\). (Standard jackknife theory gives an approximate mean and variance in the case \(R(X, F) = \theta(\hat{F}) - \theta(F)\), \(\theta\) some parameter of interest.) A general method, called the "bootstrap", is introduced, and shown to work satisfactorily on a variety of estimation problems. The jackknife is shown to be a linear approximation method for the bootstrap. The exposition proceeds by a series of examples: variance of the sample median, error rates in a linear discriminant analysis, ratio estimation, estimating regression parameters, etc.

14,483 citations
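
As a sketch of the method just described, the nonparametric bootstrap applied to one of the paper's own examples, the variance of the sample median. The code is ours (names illustrative), not Efron's.

```python
import numpy as np

def bootstrap_variance(x, statistic=np.median, n_boot=2000, seed=0):
    """Bootstrap estimate of the variance of statistic(X): resample x
    with replacement (i.e., draw n observations from the empirical
    distribution F-hat) and take the variance across replicates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    reps = np.array([
        statistic(rng.choice(x, size=len(x), replace=True))
        for _ in range(n_boot)
    ])
    return reps.var(ddof=1)

x = np.random.default_rng(1).standard_normal(51)
print(bootstrap_variance(x))  # bootstrap variance of the sample median
```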

Book
01 Jan 1992
TL;DR: In this book, the authors develop an Edgeworth-expansion view of the bootstrap, with appendices on Monte Carlo resampling schemes (uniform, balanced, antithetic, and importance resampling) and on Edgeworth expansions for the studentized bootstrap quantile estimate.
Abstract: 1: Principles of Bootstrap Methodology. 2: Principles of Edgeworth Expansion. 3: An Edgeworth View of the Bootstrap. 4: Bootstrap Curve Estimation. 5: Details of Mathematical Rigour. Appendix I: Number and Sizes of Atoms of Nonparametric Bootstrap Distribution. Appendix II: Monte Carlo Simulation (uniform resampling, linear approximation, the centring method, balanced resampling, antithetic resampling, importance resampling for approximating bias, variance, skewness, and a distribution function, and quantile estimation). Appendix III: Confidence Pictures. Appendix IV: A Non-Standard Example: Quantile Error Estimation (definition and convergence rate of the mean squared error estimate, Edgeworth expansions for the studentized bootstrap quantile estimate). Appendix V: A Non-Edgeworth View of the Bootstrap. References. Author Index.

2,306 citations
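
Since the book's central object is the studentized bootstrap, here is a minimal sketch of the bootstrap-t confidence interval for a mean, using plain uniform resampling only (none of the variance-reduction schemes of Appendix II). This is the generic textbook construction, not code from the book.

```python
import numpy as np

def bootstrap_t_ci(x, alpha=0.05, n_boot=2000, seed=0):
    """Studentized (bootstrap-t) confidence interval for the mean: each
    resample contributes t* = (mean* - mean) / se*, with se* recomputed
    from the resample; quantiles of t* replace normal critical values."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    t_star = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)
        t_star[b] = (xb.mean() - mean) / (xb.std(ddof=1) / np.sqrt(n))
    lo, hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    # The quantiles swap sides when the pivot is inverted.
    return mean - hi * se, mean - lo * se

x = np.random.default_rng(2).exponential(size=40)
print(bootstrap_t_ci(x))
```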

Posted Content
TL;DR: In this article, the authors study the effects of bias correction on confidence interval coverage in the context of kernel density and local polynomial regression estimation, and show that bias correction, coupled with a novel Studentization, can be preferred to undersmoothing for minimizing coverage error and increasing robustness to tuning parameter choice.
Abstract: Nonparametric methods play a central role in modern empirical work. While they provide inference procedures that are more robust to parametric misspecification bias, they may be quite sensitive to tuning parameter choices. We study the effects of bias correction on confidence interval coverage in the context of kernel density and local polynomial regression estimation, and prove that bias correction can be preferred to undersmoothing for minimizing coverage error and increasing robustness to tuning parameter choice. This is achieved using a novel, yet simple, Studentization, which leads to a new way of constructing kernel-based bias-corrected confidence intervals. In addition, for practical cases, we derive coverage error optimal bandwidths and discuss easy-to-implement bandwidth selectors. For interior points, we show that the MSE-optimal bandwidth for the original point estimator (before bias correction) delivers the fastest coverage error decay rate after bias correction when second-order (equivalent) kernels are employed, but is otherwise suboptimal because it is too "large". Finally, for odd-degree local polynomial regression, we show that, as with point estimation, coverage error adapts to boundary points automatically when appropriate Studentization is used; however, the MSE-optimal bandwidth for the original point estimator is suboptimal. All the results are established using valid Edgeworth expansions and illustrated with simulated data. Our findings have important consequences for empirical work as they indicate that bias-corrected confidence intervals, coupled with appropriate standard errors, have smaller coverage error and are less sensitive to tuning parameter choices in practically relevant cases where additional smoothness is available.

202 citations
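
A rough sketch of the abstract's key idea for kernel density estimation: subtract an estimate of the leading smoothing bias, then studentize with a standard error that also reflects the variability of the bias estimate. We assume a Gaussian kernel and, for simplicity, reuse the same bandwidth for the bias estimate; this follows our reading of the abstract, not the paper's exact formulas.

```python
import numpy as np
from scipy import stats

def rbc_density_ci(x, x0, h, alpha=0.05):
    """Bias-corrected kernel density estimate at x0 with a standard
    error that accounts for the noise in the bias estimate as well.
    Gaussian kernel; the same bandwidth h is used for the density and
    for the second-derivative (bias) estimate, so each observation
    contributes L_i = (K(u_i) - 0.5 * K''(u_i)) / h."""
    u = (x0 - np.asarray(x, dtype=float)) / h
    K = stats.norm.pdf(u)
    K2 = (u**2 - 1.0) * K          # second derivative of the Gaussian kernel
    L = (K - 0.5 * K2) / h         # per-observation influence terms
    f_bc = L.mean()                # bias-corrected point estimate
    se = L.std(ddof=1) / np.sqrt(len(L))
    z = stats.norm.ppf(1 - alpha / 2)
    return f_bc, (f_bc - z * se, f_bc + z * se)

x = np.random.default_rng(3).standard_normal(500)
print(rbc_density_ci(x, x0=0.0, h=0.4))
```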

Journal ArticleDOI
TL;DR: In this paper, the authors consider studentized tests in time series regressions with nonparametrically autocorrelated errors and show that for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth which minimizes the asymptotic mean squared error of the corresponding long-run variance estimator.
Abstract: The paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M = bT for some constant b ∈ (0, 1] and sample size T. It is shown that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug-in procedure works well in finite samples.

174 citations
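
To illustrate the setting, a minimal sketch of a mean test for a serially correlated series, studentized by a Bartlett-kernel long-run variance estimator with truncation lag M = bT. The paper's plug-in bandwidth rule and fixed-b critical values are not reproduced here; the example simply shows where b enters.

```python
import numpy as np

def bartlett_lrv(x, b=0.1):
    """Bartlett-kernel (Newey-West style) long-run variance estimator
    with truncation lag M = b * T."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    M = max(1, int(b * T))
    e = x - x.mean()
    lrv = e @ e / T                       # gamma_0
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1)             # Bartlett weight
        lrv += 2.0 * w * (e[j:] @ e[:-j]) / T
    return lrv

def studentized_mean_test(x, mu0=0.0, b=0.1):
    """t-statistic for H0: E[x] = mu0 with HAC standard error sqrt(lrv / T)."""
    x = np.asarray(x, dtype=float)
    return (x.mean() - mu0) / np.sqrt(bartlett_lrv(x, b) / len(x))

# AR(1) series with mean zero, to exercise the autocorrelation-robust scale
rng = np.random.default_rng(4)
e = rng.standard_normal(500)
y = np.empty_like(e)
y[0] = e[0]
for t in range(1, len(e)):
    y[t] = 0.5 * y[t - 1] + e[t]
print(studentized_mean_test(y, b=0.1))
```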

Journal ArticleDOI
(Journal version of the posted content listed above; the TL;DR and abstract are identical.)

162 citations

Network Information
Related Topics (5)
Asymptotic distribution: 16.7K papers, 564.9K citations (76% related)
Nonparametric statistics: 19.9K papers, 844.1K citations (71% related)
Estimator: 97.3K papers, 2.6M citations (69% related)
Statistical inference: 11.2K papers, 604.4K citations (69% related)
Likelihood function: 10.3K papers, 517.3K citations (68% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2021    3
2020    3
2019    2
2018    3
2017    4
2016    1