Topic

Resampling

About: Resampling is a research topic. Over its lifetime, 5,428 publications have been published within this topic, receiving 242,291 citations.


Papers
Journal ArticleDOI
TL;DR: The resampling-based method relies on an efficient technique for estimating the Hessian matrix, introduced as part of the adaptive form of the simultaneous perturbation stochastic approximation (SPSA) optimization algorithm.
Abstract: The Fisher information matrix summarizes the amount of information in the data relative to the quantities of interest. There are many applications of the information matrix in modeling, systems analysis, and estimation, including confidence region calculation, input design, prediction bounds, and “noninformative” priors for Bayesian analysis. This article reviews some basic principles associated with the information matrix, presents a resampling-based method for computing the information matrix together with some new theory related to efficient implementation, and presents some numerical results. The resampling-based method relies on an efficient technique for estimating the Hessian matrix, introduced as part of the adaptive (“second-order”) form of the simultaneous perturbation stochastic approximation (SPSA) optimization algorithm.

88 citations
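
The abstract above describes estimating the Fisher information matrix by averaging simultaneous-perturbation (SP) Hessian estimates over resampled (model-generated) pseudo-data. Below is a minimal NumPy sketch of the SP Hessian-averaging step only, assuming a user-supplied log-likelihood gradient grad_loglik (a hypothetical name); it is an illustration of the technique, not the paper's implementation.

```python
import numpy as np

def sp_hessian_estimate(grad_loglik, theta, c=1e-4, n_avg=200, rng=None):
    """Average simultaneous-perturbation (SP) estimates of the Hessian of a
    log-likelihood at theta. grad_loglik is a caller-supplied gradient
    function (hypothetical name, not from the paper)."""
    rng = np.random.default_rng(rng)
    p = theta.size
    H = np.zeros((p, p))
    for _ in range(n_avg):
        delta = rng.choice([-1.0, 1.0], size=p)            # Rademacher perturbation
        dg = grad_loglik(theta + c * delta) - grad_loglik(theta - c * delta)
        Hk = np.outer(dg / (2.0 * c), 1.0 / delta)          # one SP Hessian estimate
        H += 0.5 * (Hk + Hk.T)                              # symmetrize before averaging
    return H / n_avg

# The Fisher information is then approximated by averaging -Hessian over
# pseudo-datasets simulated from the model at theta, one SP estimate per dataset.
```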

01 Jan 1990
TL;DR: In this paper, several procedures from the bootstrap literature are reviewed and, because no single method has emerged as the best solution for all problems, some new methods are presented as well.
Abstract: Summary: The fundamental problem addressed in this paper is the problem of constructing confidence limits for a functional of a distribution in nonparametric settings. Specifically, given a random sample of observations from a distribution F, interest focuses on constructing confidence limits for a real-valued functional θ of F. Several procedures from the bootstrap literature are reviewed and, because no single method has emerged as the best solution for all problems, some new methods are presented as well. These new methods are motivated by an appropriate reduction of the nonparametric problem to a parametric problem with no nuisance parameters via the construction of a least favorable family. The coverage error of an approximate confidence limit is the difference between the exact coverage probability and the nominal level. All the procedures are compared by determining the exact order of coverage error of each method.

88 citations
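
For reference only (this is one of the standard procedures the paper compares, not its new least-favorable-family constructions), here is a minimal sketch of percentile bootstrap confidence limits for a functional θ = stat(F) estimated from a sample:

```python
import numpy as np

def percentile_bootstrap_ci(x, stat, alpha=0.05, n_boot=2000, rng=None):
    """Basic percentile bootstrap confidence limits for a functional
    theta = stat(F), estimated from the sample x (illustrative only)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    boot = np.array([stat(rng.choice(x, size=n, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2.0, 1.0 - alpha / 2.0])

# Example: approximate 95% limits for the median of a skewed sample
x = np.random.default_rng(0).exponential(size=100)
print(percentile_bootstrap_ci(x, np.median))
```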

Journal ArticleDOI
TL;DR: The intrinsic regression model, which is a semiparametric model, uses a link function to map from the Euclidean space of covariates to the Riemannian manifold of positive-definite matrices, and develops an estimation procedure to calculate parameter estimates and establish their limiting distributions.
Abstract: The aim of this paper is to develop an intrinsic regression model for the analysis of positive-definite matrices as responses in a Riemannian manifold and their association with a set of covariates, such as age and gender, in a Euclidean space. The primary motivation and application of the proposed methodology are in medical imaging. Because the set of positive-definite matrices does not form a vector space, directly applying classical multivariate regression may be inadequate in establishing the relationship between positive-definite matrices and covariates of interest, such as age and gender, in real applications. Our intrinsic regression model, which is a semiparametric model, uses a link function to map from the Euclidean space of covariates to the Riemannian manifold of positive-definite matrices. We develop an estimation procedure to calculate parameter estimates and establish their limiting distributions. We develop score statistics to test linear hypotheses on unknown parameters and develop a test procedure based on a resampling method to simultaneously assess the statistical significance of linear hypotheses across a large region of interest. Simulation studies are used to demonstrate the methodology and examine the finite sample performance of the test procedure for controlling the family-wise error rate. We apply our methods to the detection of statistical significance of diagnostic effects on the integrity of white matter in a diffusion tensor study of human immunodeficiency virus. Supplemental materials for this article are available online.

88 citations
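
The model links Euclidean covariates to the manifold of positive-definite matrices through a link function. As an illustration of one possible link of this kind (a log-Euclidean-style choice via the matrix exponential; not necessarily the parameterization used in the paper), a short sketch:

```python
import numpy as np
from scipy.linalg import expm

def spd_link(beta, x):
    """Map a covariate vector x (length p) to a positive-definite matrix via
    the matrix exponential of a symmetric linear predictor. beta is a
    (p, d, d) array of symmetric coefficient matrices (hypothetical
    parameterization, shown only to illustrate the idea of a link)."""
    S = np.tensordot(x, beta, axes=1)   # (d, d) linear predictor
    S = 0.5 * (S + S.T)                 # enforce symmetry
    return expm(S)                      # exp of a symmetric matrix is SPD

# Example: 2x2 responses with two covariates (intercept and age)
beta = np.stack([np.eye(2), 0.1 * np.ones((2, 2))])
print(spd_link(beta, np.array([1.0, 30.0])))
```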

Journal ArticleDOI
TL;DR: A general new method, the conditional permutation test, is proposed for testing the conditional independence of variables X and Y given a potentially high-dimensional random vector Z that may contain confounding factors, and bounds on the type I error are established in terms of the error in approximating the conditional distribution of X|Z.
Abstract: We propose a general new method, the conditional permutation test, for testing the conditional independence of variables X and Y given a potentially high dimensional random vector Z that may contain confounding factors. The test permutes entries of X non‐uniformly, to respect the existing dependence between X and Z and thus to account for the presence of these confounders. Like the conditional randomization test of Candes and co‐workers in 2018, our test relies on the availability of an approximation to the distribution of X|Z—whereas their test uses this estimate to draw new X‐values, for our test we use this approximation to design an appropriate non‐uniform distribution on permutations of the X‐values already seen in the true data. We provide an efficient Markov chain Monte Carlo sampler for the implementation of our method and establish bounds on the type I error in terms of the error in the approximation of the conditional distribution of X|Z, finding that, for the worst‐case test statistic, the inflation in type I error of the conditional permutation test is no larger than that of the conditional randomization test. We validate these theoretical results with experiments on simulated data and on the Capital Bikeshare data set.

88 citations
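
For context, a plain permutation test permutes X uniformly; the sketch below shows that baseline in NumPy. The paper's conditional permutation test instead draws permutations non-uniformly, weighted by an approximation to the conditional law of X given Z (sampled via MCMC), which is what allows type I error control in the presence of confounders.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=5000, rng=None):
    """Plain (unconditional) permutation test of independence between x and y,
    using |correlation| as the test statistic. Shown only as a baseline; the
    conditional permutation test permutes x non-uniformly using an estimate
    of X|Z."""
    rng = np.random.default_rng(rng)
    stat = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
    observed = stat(x, y)
    perm = np.array([stat(rng.permutation(x), y) for _ in range(n_perm)])
    return (1 + np.sum(perm >= observed)) / (n_perm + 1)
```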

Journal ArticleDOI
TL;DR: In this article, several types of bootstrap confidence intervals are studied, and the type of interval is determined by two factors: the method of drawing the bootstrap sample and the method of forming the confidence interval from a given sample.
Abstract: We study bootstrap confidence intervals for three types of parameters in Cox's proportional hazards model: the regression parameter, the survival function at fixed time points, and the median survival time at fixed values of a covariate. Several types of bootstrap confidence intervals are studied, and the type of interval is determined by two factors. One factor is the method of drawing the bootstrap sample. We consider three such methods: (1) ordinary resampling from the empirical cumulative distribution function, (2) resampling conditional on the covariates, and (3) resampling conditional on the covariates and the censoring pattern. Another factor is the method of forming the confidence interval from a given sample; the methods considered are the percentile, hybrid, and bootstrap-t. All the methods of forming confidence intervals are compared to each other and to the standard asymptotic method via a Monte Carlo study. The data sets for this Monte Carlo study are simulated conditionally on the c...

88 citations
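
The first resampling scheme in the abstract, ordinary resampling from the empirical distribution, amounts to drawing whole (time, status, covariate) rows with replacement and refitting the Cox model each time. A minimal sketch, assuming a pandas DataFrame of such rows and a caller-supplied fit_coefs routine (hypothetical name, e.g. a wrapper around a Cox model fitter) that returns the fitted regression coefficients:

```python
import numpy as np

def ordinary_bootstrap_cis(df, fit_coefs, n_boot=500, alpha=0.05, rng=None):
    """Ordinary (case) resampling for Cox regression coefficients: draw whole
    rows of the data frame with replacement, refit with the caller-supplied
    fit_coefs (hypothetical callable returning the coefficient vector), and
    form percentile intervals from the bootstrap distribution."""
    rng = np.random.default_rng(rng)
    coefs = np.array([
        fit_coefs(df.sample(n=len(df), replace=True,
                            random_state=int(rng.integers(2**31 - 1))))
        for _ in range(n_boot)
    ])
    # Percentile confidence limits, one pair per coefficient
    return np.quantile(coefs, [alpha / 2.0, 1.0 - alpha / 2.0], axis=0)
```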


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (89% related)
Inference: 36.8K papers, 1.3M citations (87% related)
Sampling (statistics): 65.3K papers, 1.2M citations (86% related)
Regression analysis: 31K papers, 1.7M citations (86% related)
Markov chain: 51.9K papers, 1.3M citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2025    1
2024    2
2023    377
2022    759
2021    275
2020    279