Topic

Resampling

About: Resampling is a research topic. Over its lifetime, 5,428 publications have been published within this topic, receiving 242,291 citations.


Papers
Journal ArticleDOI
TL;DR: McMurry and Politis propose an estimator that leaves the main diagonals of the sample autocovariance matrix intact while gradually down-weighting off-diagonal entries towards zero.
Abstract: Author(s): McMurry, Timothy L; Politis, D N. We address the problem of estimating the autocovariance matrix of a stationary process. Under short-range dependence assumptions, convergence rates are established for a gradually tapered version of the sample autocovariance matrix and for its inverse. The proposed estimator is formed by leaving the main diagonals of the sample autocovariance matrix intact while gradually down-weighting off-diagonal entries towards zero. In addition, we show the same convergence rates hold for a positive definite version of the estimator, and we introduce a new approach for selecting the banding parameter. The new matrix estimator is shown to perform well theoretically and in simulation studies. As an application we introduce a new resampling scheme for stationary processes termed the linear process bootstrap (LPB). The LPB is shown to be asymptotically valid for the sample mean and related statistics. The effectiveness of the proposed methods is demonstrated in a simulation study.
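A minimal sketch of the tapering idea, assuming a flat-top trapezoidal taper and a hand-picked banding parameter l (the paper also develops a data-driven rule for choosing the band and a positive definite correction, neither reproduced here):

```python
import numpy as np

def sample_autocov(x, max_lag):
    """Biased sample autocovariances gamma_hat(0..max_lag)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    return np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(max_lag + 1)])

def trapezoid_taper(k, l):
    """Weight 1 for lags |k| <= l, decaying linearly to zero by 2*l."""
    return np.clip(2.0 - abs(k) / l, 0.0, 1.0)

def tapered_autocov_matrix(x, l):
    """n x n tapered estimate of the autocovariance matrix."""
    n = len(x)
    gamma = sample_autocov(x, n - 1)
    g = gamma * np.array([trapezoid_taper(k, l) for k in range(n)])
    # Toeplitz matrix: entry (i, j) holds the tapered autocovariance at lag |i - j|.
    idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return g[idx]

# Example: AR(1) series with a small banding parameter.
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + e[t]
Sigma_hat = tapered_autocov_matrix(x, l=10)
print(Sigma_hat.shape, Sigma_hat[0, :4].round(3))
```

The main diagonals (lags up to l) are left intact while the taper pushes distant-lag entries toward zero, which is the structure the TL;DR describes.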

99 citations

Journal ArticleDOI
TL;DR: In this paper, a censored quantile instrumental variable (CQIV) estimator is proposed, which combines Powell's censored quantile regression (CQR) with a control variable approach to incorporate endogenous regressors.

99 citations

Proceedings Article
21 Jun 2014
TL;DR: This work devises a procedure for detecting concept drifts in data streams that relies on analyzing the empirical loss of learning algorithms, obtaining statistics from the loss distribution by reusing the data multiple times via resampling.
Abstract: Detecting changes in data-streams is an important part of enhancing learning quality in dynamic environments. We devise a procedure for detecting concept drifts in data-streams that relies on analyzing the empirical loss of learning algorithms. Our method is based on obtaining statistics from the loss distribution by reusing the data multiple times via resampling. We present theoretical guarantees for the proposed procedure based on the stability of the underlying learning algorithms. Experimental results show that the method has high recall and precision, and performs well in the presence of noise.
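As a simplified illustration of the resampling idea (not the paper's exact statistic or its stability-based guarantees), one can compare recent losses against earlier ones inside a window and calibrate the comparison by permuting the window:

```python
import numpy as np

def drift_pvalue(losses, n_perm=1000, seed=0):
    """Resampling test for a shift in mean loss within a stream window.

    The statistic compares the second half of the window to the first;
    the null distribution is built by permuting the window, which
    destroys any time ordering. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    losses = np.asarray(losses, dtype=float)
    half = len(losses) // 2
    observed = losses[half:].mean() - losses[:half].mean()
    null = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(losses)
        null[b] = perm[half:].mean() - perm[:half].mean()
    # One-sided p-value: concept drift raises the recent loss.
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

# Example: per-example loss jumps mid-window, suggesting a drift.
rng = np.random.default_rng(1)
window = np.concatenate([rng.normal(0.2, 0.05, 200),
                         rng.normal(0.4, 0.05, 200)])
print(drift_pvalue(window))  # small p-value -> flag a drift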

99 citations

Posted Content
TL;DR: In this paper, the authors develop a nonparametric QR-series framework for performing inference on the entire conditional quantile function and its linear functionals. Quantile regression is a principal method for analyzing the impact of covariates on outcomes, and the authors demonstrate the practical utility of their results with an example in which they estimate the price elasticity function and test the Slutsky condition of the individual demand for gasoline, as indexed by the individual unobserved propensity for gasoline consumption.
Abstract: Quantile regression (QR) is a principal regression method for analyzing the impact of covariates on outcomes. The impact is described by the conditional quantile function and its functionals. In this paper we develop the nonparametric QR-series framework, covering many regressors as a special case, for performing inference on the entire conditional quantile function and its linear functionals. In this framework, we approximate the entire conditional quantile function by a linear combination of series terms with quantile-specific coefficients and estimate the function-valued coefficients from the data. We develop large sample theory for the QR-series coefficient process, namely we obtain uniform strong approximations to the QR-series coefficient process by conditionally pivotal and Gaussian processes. Based on these strong approximations, or couplings, we develop four resampling methods (pivotal, gradient bootstrap, Gaussian, and weighted bootstrap) that can be used for inference on the entire QR-series coefficient function. We apply these results to obtain estimation and inference methods for linear functionals of the conditional quantile function, such as the conditional quantile function itself, its partial derivatives, average partial derivatives, and conditional average partial derivatives. Specifically, we obtain uniform rates of convergence and show how to use the four resampling methods mentioned above for inference on the functionals. All of the above results are for function-valued parameters, holding uniformly in both the quantile index and the covariate value, and covering the pointwise case as a by-product. We demonstrate the practical utility of these results with an example, where we estimate the price elasticity function and test the Slutsky condition of the individual demand for gasoline, as indexed by the individual unobserved propensity for gasoline consumption.
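The four resampling schemes named above (pivotal, gradient bootstrap, Gaussian, weighted bootstrap) are specific to the QR-series setting. As a simplified stand-in, the sketch below uses a plain pairs (nonparametric) bootstrap for the coefficients of a single quantile regression, with statsmodels' QuantReg doing the fitting; function and variable names are illustrative:

```python
import numpy as np
import statsmodels.api as sm

def qr_pairs_bootstrap(y, X, q=0.5, n_boot=500, seed=0):
    """Pairs-bootstrap draws of quantile-regression coefficients.

    Resamples (y_i, x_i) rows with replacement and refits the QR at
    quantile q -- a generic substitute for the paper's four schemes.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        fit = sm.QuantReg(y[idx], X[idx]).fit(q=q)
        draws.append(fit.params)
    return np.vstack(draws)

# Example: median regression with one regressor plus an intercept.
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 300)
y = 1.0 + 2.0 * x + rng.standard_normal(300)
X = sm.add_constant(x)
draws = qr_pairs_bootstrap(y, X, q=0.5)
lo, hi = np.percentile(draws[:, 1], [2.5, 97.5])
print(f"95% interval for the slope at q=0.5: [{lo:.2f}, {hi:.2f}]")
```

Unlike the paper's couplings, this gives only pointwise intervals at a fixed quantile index; uniform bands over the quantile and covariate values are what the QR-series theory adds.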

99 citations

Book
01 Jan 2001
TL;DR: Permutation tests are described as a paradox of old and new: a test statistic is computed on the observed data, and the data are then either permuted over all possible arrangements (an exact permutation test), used to calculate the exact moments of the permutation distribution (a moment approximation test), or permuted over a subset of all arrangements (a resampling approximation test).
Abstract: Permutation tests are a paradox of old and new. Permutation tests pre-date most traditional parametric statistics, but only recently have become part of the mainstream discussion regarding statistical testing. Permutation tests follow a permutation or 'conditional on errors' model whereby a test statistic is computed on the observed data, then (1) the data are permuted over all possible arrangements of the data (an exact permutation test); (2) the data are used to calculate the exact moments of the permutation distribution (a moment approximation permutation test); or (3) the data are permuted over a subset of all possible arrangements of the data (a resampling approximation permutation test). The earliest permutation tests date from the 1920s, but it was not until the advent of modern day computing that permutation tests became a practical alternative to parametric statistical tests. In recent years, permutation analogs of existing statistical tests have been developed. These permutation tests provide noteworthy advantages over their parametric counterparts for small samples and populations, or when distributional assumptions cannot be met. Unique permutation tests have also been developed that allow for the use of Euclidean distance rather than the squared Euclidean distance that is typically employed in parametric tests. This overview provides a chronology of the development of permutation tests accompanied by a discussion of the advances in computing that made permutation tests feasible. Attention is paid to the important differences between 'population models' and 'permutation models', and between tests based on Euclidean and squared Euclidean distances. WIREs Comp Stat 2011, 3, 527-542. DOI: 10.1002/wics.177
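The exact and resampling-approximation flavours differ only in how the permutation distribution is obtained. A minimal sketch for a two-sample difference in means, switching between full enumeration and a resampling approximation (the cutoff n_resamples is an arbitrary choice for the illustration):

```python
import numpy as np
from itertools import combinations
from math import comb

def perm_test_mean_diff(a, b, n_resamples=10000, seed=0):
    """Two-sample permutation test for a difference in means.

    Enumerates all arrangements when feasible (an exact test);
    otherwise falls back to a resampling approximation.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.concatenate([a, b])
    n, n_a = len(pooled), len(a)
    observed = abs(a.mean() - b.mean())

    if comb(n, n_a) <= n_resamples:            # exact permutation test
        stats = []
        for idx in combinations(range(n), n_a):
            mask = np.zeros(n, bool)
            mask[list(idx)] = True
            stats.append(abs(pooled[mask].mean() - pooled[~mask].mean()))
        return np.mean(np.array(stats) >= observed)

    rng = np.random.default_rng(seed)          # resampling approximation
    count = 0
    for _ in range(n_resamples):
        perm = rng.permutation(pooled)
        count += abs(perm[:n_a].mean() - perm[n_a:].mean()) >= observed
    return (1 + count) / (1 + n_resamples)

print(perm_test_mean_diff([1.1, 2.3, 1.8, 2.9], [3.0, 3.4, 2.8, 4.1]))
```

With four observations per group the test enumerates all 70 arrangements exactly; for larger samples only a random subset is drawn, which is precisely case (3) in the abstract.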

98 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (89% related)
Inference: 36.8K papers, 1.3M citations (87% related)
Sampling (statistics): 65.3K papers, 1.2M citations (86% related)
Regression analysis: 31K papers, 1.7M citations (86% related)
Markov chain: 51.9K papers, 1.3M citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2025: 1
2024: 2
2023: 377
2022: 759
2021: 275
2020: 279