scispace - formally typeset
Topic

Resampling

About: Resampling is a research topic. Over its lifetime, 5,428 publications have been published within this topic, receiving 242,291 citations.


Papers
Journal ArticleDOI
TL;DR: A Bayesian approach using the sampling/importance resampling algorithm is reformulated to improve estimates of fishery policy performance indices, extending the number of parameters that can be treated as uncertain, and is applied to New Zealand's western stock of hoki.
Abstract: Scientific advice to fishery managers needs to be expressed in probabilistic terms to convey uncertainty about the consequences of alternative harvesting policies (policy performance indices). In most Bayesian approaches to such advice, relatively few of the model parameters used can be treated as uncertain, and deterministic assumptions about population dynamics are required; this can bias the degree of certainty and estimates of policy performance indices. We reformulate a Bayesian approach that uses the sampling/importance resampling algorithm to improve estimates of policy performance indices; it extends the number of parameters that can be treated as uncertain, does not require deterministic assumptions about population dynamics, and can use any of the types of fishery assessment models and data. Application of the approach to New Zealand's western stock of hoki (Macruronus novaezelandiae) shows that the use of Bayesian prior information for parameters such as the constant of proportionality for acou...
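The sampling/importance resampling (SIR) algorithm the abstract refers to can be sketched in a few lines. The following is a minimal illustration on a hypothetical one-parameter toy model (a Normal mean with a Normal prior); the model, prior, and all names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: data assumed Normal(theta, 1), prior Normal(0, 2).
data = rng.normal(1.0, 1.0, size=50)

# 1. Sample candidate parameters from the prior (the proposal distribution).
theta = rng.normal(0.0, 2.0, size=10_000)

# 2. Importance weights: likelihood of the data under each candidate.
log_w = -0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# 3. Resample candidates with probability proportional to their weights;
#    the resampled set approximates the posterior distribution.
posterior = rng.choice(theta, size=5_000, replace=True, p=w)
print(posterior.mean())  # close to the sample mean of the data
```

The same three-step pattern (propose, weight, resample) underlies the fishery application, with the toy likelihood replaced by a full stock-assessment model.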

172 citations

Journal ArticleDOI
TL;DR: A Monte Carlo study analyzing the performance of bootstrap confidence bands (obtained with different resampling methods) for several functional estimators is presented, providing insights on the asymptotic validity of the bootstrap methodology when functional data, as well as a functional parameter, are involved.
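One of the resampling methods such a study would compare is the naive bootstrap of whole curves. A minimal sketch for a pointwise percentile band around a mean function, on simulated data (the curves, grid, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical functional sample: n noisy curves observed on a common grid.
grid = np.linspace(0, 1, 100)
n = 200
curves = np.sin(2 * np.pi * grid) + rng.normal(0, 0.5, size=(n, grid.size))

mean_curve = curves.mean(axis=0)  # the functional estimator

# Naive bootstrap: resample whole curves with replacement, recompute the
# mean function, and take pointwise percentiles as a confidence band.
B = 500
boot_means = np.empty((B, grid.size))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot_means[b] = curves[idx].mean(axis=0)

lower = np.percentile(boot_means, 2.5, axis=0)
upper = np.percentile(boot_means, 97.5, axis=0)
```

A pointwise band like this is the simplest case; simultaneous bands, which the functional-data literature also studies, require adjusting the percentile cutoffs.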

171 citations

Journal ArticleDOI
TL;DR: It is shown that the weighted ensemble technique is statistically exact for a wide class of Markovian and non-Markovian dynamics; numerical examples confirm the claims, including the use of bins that adaptively find the target state in a simple model.
Abstract: The “weighted ensemble” method, introduced by Huber and Kim [Biophys. J. 70, 97 (1996)], is one of a handful of rigorous approaches to path sampling of rare events. Expanding earlier discussions, we show that the technique is statistically exact for a wide class of Markovian and non-Markovian dynamics. The derivation is based on standard path-integral (path probability) ideas, but recasts the weighted-ensemble approach as simple “resampling” in path space. Similar reasoning indicates that arbitrary nonstatic binning procedures, which merely guide the resampling process, are also valid. Numerical examples confirm the claims, including the use of bins which can adaptively find the target state in a simple model.
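The "resampling in path space" view can be made concrete with a toy weighted-ensemble run. Below is a minimal sketch for 1-D Brownian walkers binned along a coordinate; the bin edges, walkers-per-bin target, and dynamics are illustrative choices, not the paper's setup. The key invariant, which makes the bookkeeping statistically exact, is that resampling within a bin conserves the bin's total probability weight:

```python
import numpy as np

rng = np.random.default_rng(2)

n_bins, per_bin, dt = 10, 4, 0.01
edges = np.linspace(0.0, 1.0, n_bins + 1)

walkers = np.zeros(per_bin)                 # positions, all start at x = 0
weights = np.full(per_bin, 1.0 / per_bin)   # probability weights, sum to 1

for step in range(200):
    # Unbiased dynamics: simple Brownian motion.
    walkers = walkers + rng.normal(0, np.sqrt(dt), size=walkers.size)

    # Resampling: in each occupied bin, draw `per_bin` walkers with
    # probability proportional to weight; each copy carries an equal share
    # of the bin's total weight, so total probability is conserved.
    bins = np.clip(np.digitize(walkers, edges) - 1, 0, n_bins - 1)
    new_x, new_w = [], []
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        w_bin = weights[mask].sum()
        pick = rng.choice(np.flatnonzero(mask), size=per_bin,
                          p=weights[mask] / w_bin)
        new_x.extend(walkers[pick])
        new_w.extend([w_bin / per_bin] * per_bin)
    walkers, weights = np.array(new_x), np.array(new_w)

print(weights.sum())  # total weight stays 1 throughout the run
```

Because the dynamics between resampling steps are untouched and each bin's weight is conserved, any (even adaptive, nonstatic) binning scheme merely redirects sampling effort without biasing the path ensemble, which is the paper's central point.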

170 citations

Journal ArticleDOI
TL;DR: In this article, a new resampling procedure, the dependent wild bootstrap, is proposed for stationary time series; unlike block-based methods, it extends easily to irregularly spaced time series with no implementational difficulty.
Abstract: We propose a new resampling procedure, the dependent wild bootstrap, for stationary time series. As a natural extension of the traditional wild bootstrap to time series setting, the dependent wild bootstrap offers a viable alternative to the existing block-based bootstrap methods, whose properties have been extensively studied over the last two decades. Unlike all of the block-based bootstrap methods, the dependent wild bootstrap can be easily extended to irregularly spaced time series with no implementational difficulty. Furthermore, it preserves the favorable bias and mean squared error property of the tapered block bootstrap, which is the state-of-the-art block-based method in terms of asymptotic accuracy of variance estimation and distribution approximation. The consistency of the dependent wild bootstrap in distribution approximation is established under the framework of the smooth function model. In addition, we obtain the bias and variance expansions of the dependent wild bootstrap variance estimat...

170 citations

Journal ArticleDOI
TL;DR: This work provides guidance towards the SIR workflow, i.e., which proposal distribution to choose and how many parameter vectors to sample when performing SIR, using diagnostics developed for this purpose.
Abstract: Taking parameter uncertainty into account is key to make drug development decisions such as testing whether trial endpoints meet defined criteria. Currently used methods for assessing parameter uncertainty in NLMEM have limitations, and there is a lack of diagnostics for when these limitations occur. In this work, a method based on sampling importance resampling (SIR) is proposed, which has the advantage of being free of distributional assumptions and does not require repeated parameter estimation. To perform SIR, a large number of parameter vectors are simulated from a given proposal uncertainty distribution. Their likelihood given the true uncertainty is then approximated by the ratio between the likelihood of the data given each vector and the likelihood of each vector given the proposal distribution, called the importance ratio. Non-parametric uncertainty distributions are obtained by resampling parameter vectors according to probabilities proportional to their importance ratios. Two simulation examples and three real data examples were used to define how SIR should be performed with NLMEM and to investigate the performance of the method. The simulation examples showed that SIR was able to recover the true parameter uncertainty. The real data examples showed that parameter 95 % confidence intervals (CI) obtained with SIR, the covariance matrix, bootstrap and log-likelihood profiling were generally in agreement when 95 % CI were symmetric. For parameters showing asymmetric 95 % CI, SIR 95 % CI provided a close agreement with log-likelihood profiling but often differed from bootstrap 95 % CI, which had been shown to be suboptimal for the chosen examples. This work also provides guidance on the SIR workflow, i.e., which proposal distribution to choose and how many parameter vectors to sample when performing SIR, using diagnostics developed for this purpose.
SIR is a promising approach for assessing parameter uncertainty as it is applicable in many situations where other methods for assessing parameter uncertainty fail, such as in the presence of small datasets, highly nonlinear models or meta-analysis.
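The importance-ratio computation the abstract describes (likelihood of the data given each vector, divided by the proposal density of that vector) can be sketched for a hypothetical one-parameter model with a Normal proposal centered at the point estimate; the model and all names are illustrative, not an NLMEM:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative data: y ~ Normal(mu, 1); proposal for mu centered at the MLE.
y = rng.normal(2.0, 1.0, size=40)
mu_hat = y.mean()
se_hat = y.std(ddof=1) / np.sqrt(y.size)

# 1. Simulate many parameter vectors from the proposal distribution.
M = 20_000
mu = rng.normal(mu_hat, se_hat, size=M)

# 2. Importance ratio: likelihood of the data given each vector divided
#    by the proposal density of that vector (constants cancel).
log_lik = -0.5 * ((y[None, :] - mu[:, None]) ** 2).sum(axis=1)
log_prop = -0.5 * ((mu - mu_hat) / se_hat) ** 2 - np.log(se_hat)
log_ir = log_lik - log_prop
ir = np.exp(log_ir - log_ir.max())

# 3. Resample vectors with probability proportional to the importance
#    ratios; percentiles give a non-parametric confidence interval.
resampled = rng.choice(mu, size=5_000, p=ir / ir.sum())
lo, hi = np.percentile(resampled, [2.5, 97.5])
```

In the NLMEM setting of the paper, step 2 would use the mixed-effects model likelihood and a multivariate proposal (e.g. built from the estimated covariance matrix), but the weighting and resampling logic is the same.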

168 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (89% related)
Inference: 36.8K papers, 1.3M citations (87% related)
Sampling (statistics): 65.3K papers, 1.2M citations (86% related)
Regression analysis: 31K papers, 1.7M citations (86% related)
Markov chain: 51.9K papers, 1.3M citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2025    1
2024    2
2023    377
2022    759
2021    275
2020    279