scispace - formally typeset

Coverage probability

About: Coverage probability is a research topic. Over its lifetime, 2,479 publications have been published on this topic, receiving 53,259 citations.


Papers
Journal ArticleDOI
TL;DR: In this paper, the frequency properties of Wahba's Bayesian confidence intervals for smoothing splines are investigated by a large-sample approximation and by a simulation study, and the authors explain why the ACP is accurate for functions that are much smoother than the sample paths prescribed by the prior.
Abstract: The frequency properties of Wahba's Bayesian confidence intervals for smoothing splines are investigated by a large-sample approximation and by a simulation study. When the coverage probabilities for these pointwise confidence intervals are averaged across the observation points, the average coverage probability (ACP) should be close to the nominal level. From a frequency point of view, this agreement occurs because the average posterior variance for the spline is similar to a consistent estimate of the average squared error and because the average squared bias is a modest fraction of the total average squared error. These properties are independent of the Bayesian assumptions used to derive this confidence procedure, and they explain why the ACP is accurate for functions that are much smoother than the sample paths prescribed by the prior. This analysis accounts for the choice of the smoothing parameter (bandwidth) using cross-validation. In the case of natural splines an adaptive method for avo...

274 citations
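As a rough illustration of the ACP idea in the paper above, the sketch below simulates pointwise 95% confidence intervals for a fitted curve and averages the empirical coverage across the observation points. It substitutes an unbiased cubic polynomial fit with known noise level for the smoothing spline (sample size, noise level, and the true curve are all hypothetical choices), so the average coverage probability should land near the nominal level.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, z = 50, 2000, 1.96                      # design size, Monte Carlo reps, 95% normal quantile
x = np.linspace(0, 1, n)
f = 1 + 2 * x - 3 * x**2 + x**3                  # hypothetical truth, inside the fitting basis (no bias)
X = np.column_stack([x**k for k in range(4)])    # cubic polynomial basis (stand-in for a spline)
H = X @ np.linalg.inv(X.T @ X) @ X.T             # hat matrix of the linear smoother
sigma = 0.2
se = sigma * np.sqrt(np.diag(H))                 # pointwise s.e. of the fitted curve (known sigma)

covered = np.zeros(n)
for _ in range(reps):
    y = f + rng.normal(0, sigma, n)
    fhat = H @ y
    covered += np.abs(fhat - f) <= z * se        # pointwise coverage indicators

acp = covered.mean() / reps                      # average coverage probability across design points
```

With zero bias the pointwise coverage is exactly nominal; the paper's more interesting case is when a modest squared bias is traded against the posterior variance, which this sketch does not attempt to reproduce.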

Journal ArticleDOI
TL;DR: The β-GPP, an intermediate class between the PPP and the GPP, is introduced and promoted as a model for wireless networks whose nodes exhibit repulsion; the fitted β-GPP is found to closely model the deployment of actual base stations in terms of coverage probability and other statistics.
Abstract: The spatial structure of transmitters in wireless networks plays a key role in evaluating mutual interference and, hence, performance. Although the Poisson point process (PPP) has been widely used to model the spatial configuration of wireless networks, it is not suitable for networks with repulsion. The Ginibre point process (GPP) is one of the main examples of determinantal point processes that can be used to model random phenomena where repulsion is observed. Considering the accuracy, tractability, and practicability tradeoffs, we introduce and promote the $\beta$ -GPP, which is an intermediate class between the PPP and the GPP, as a model for wireless networks when the nodes exhibit repulsion. To show that the model leads to analytically tractable results in several cases of interest, we derive the mean and variance of the interference using two different approaches: the Palm measure approach and the reduced second-moment approach, and then provide approximations of the interference distribution by three known probability density functions. In addition, to show that the model is relevant for cellular systems, we derive the coverage probability of a typical user and find that the fitted $\beta$ -GPP can closely model the deployment of actual base stations in terms of coverage probability and other statistics.

255 citations
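The coverage probability of a typical user can be estimated by Monte Carlo. The sketch below uses the plain PPP with Rayleigh fading and nearest-base-station association, i.e. the tractable baseline against which the β-GPP above is compared, not the β-GPP itself; the density, window radius, path-loss exponent, and SIR threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, R, alpha, theta = 1.0, 20.0, 4.0, 1.0   # BS density, window radius, path-loss exponent, SIR threshold
reps, cov = 2000, 0

for _ in range(reps):
    n = rng.poisson(lam * np.pi * R**2)      # PPP: Poisson count in the disc
    r = R * np.sqrt(rng.random(n))           # uniform radii in the disc, typical user at the origin
    d = np.sort(r)
    serve, interf = d[0], d[1:]              # nearest base station serves; the rest interfere
    fade_s = rng.exponential()               # Rayleigh fading -> exponential power
    fade_i = rng.exponential(size=interf.size)
    sir = fade_s * serve**-alpha / np.sum(fade_i * interf**-alpha)
    cov += sir > theta

pcov = cov / reps                            # interference-limited coverage probability
```

For these parameters the known closed form for the interference-limited Rayleigh/PPP case gives a coverage probability near 1/(1 + π/4), so the simulated value should fall in that vicinity.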

Journal ArticleDOI
TL;DR: In this article, the authors make two points about the effect of the number of bootstrap simulations, B, on percentile-t bootstrap confidence intervals: coverage probability, and the distance of the simulated critical point from the true critical point derived with B = infinity.
Abstract: The purpose of this document is to make two points about the effect of the number of bootstrap simulations, B, on percentile-t bootstrap confidence intervals. The first point concerns coverage probability; the second, the distance of the simulated critical point from the true critical point derived with B = infinity. In both cases the author has in mind applications to smooth statistics, such as the Studentized mean of a sample drawn from a continuous distribution. He indicates the changes that have to be made if the distribution of the statistic is not smooth. Additional keywords: Exponential functions.

252 citations
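A minimal percentile-t bootstrap interval for a Studentized mean with B resamples might look like the sketch below (the sample and the value of B are hypothetical; the paper's concern is how finite B perturbs exactly this kind of interval relative to B = infinity).

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(size=30)                 # hypothetical continuous sample; smooth statistic
n, B = x.size, 999
xbar, s = x.mean(), x.std(ddof=1)

# bootstrap distribution of the Studentized mean
tstar = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    tstar[b] = (xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(n))

lo_q, hi_q = np.quantile(tstar, [0.025, 0.975])

# percentile-t interval: invert the bootstrap distribution of the t statistic
ci = (xbar - hi_q * s / np.sqrt(n), xbar - lo_q * s / np.sqrt(n))
```

Choosing B of the form 999 (so that (B + 1) * 0.025 is an integer) is a common convention for quantile-based bootstrap intervals.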

Journal ArticleDOI
TL;DR: This paper discusses an alternative simple approach for constructing the confidence interval, based on the t-distribution, which has improved coverage probability and is easy to calculate, and unlike some methods suggested in the statistical literature, no iterative computation is required.
Abstract: In the context of a random effects model for meta-analysis, a number of methods are available to estimate confidence limits for the overall mean effect. A simple and commonly used method is the DerSimonian and Laird approach. This paper discusses an alternative simple approach for constructing the confidence interval, based on the t-distribution. This approach has improved coverage probability compared to the DerSimonian and Laird method. Moreover, it is easy to calculate, and unlike some methods suggested in the statistical literature, no iterative computation is required.

240 citations
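A sketch of the contrast described above: the DerSimonian and Laird estimate of the between-study variance feeds a weighted mean, and the t-based interval replaces the normal quantile with a t quantile on k - 1 degrees of freedom, in the spirit of the approach the paper discusses. The effect sizes and within-study variances below are hypothetical, and the t quantile is hard-coded for 4 degrees of freedom to keep the example dependency-free.

```python
import numpy as np

# hypothetical study-level effects and within-study variances (k = 5 studies)
y = np.array([0.10, 0.30, 0.35, 0.65, 0.45])
v = np.array([0.04, 0.03, 0.05, 0.04, 0.06])
k = y.size

# DerSimonian-Laird moment estimate of the between-study variance tau^2
w = 1 / v
ybar = np.sum(w * y) / w.sum()
q = np.sum(w * (y - ybar)**2)
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - (k - 1)) / c)

# random-effects weighted mean
wstar = 1 / (v + tau2)
mu = np.sum(wstar * y) / wstar.sum()

# t-based interval: weighted variance estimator with k - 1 degrees of freedom
var_hk = np.sum(wstar * (y - mu)**2) / ((k - 1) * wstar.sum())
t975 = 2.776                                 # t quantile, 0.975, df = 4
ci = (mu - t975 * np.sqrt(var_hk), mu + t975 * np.sqrt(var_hk))
```

No iteration is required at any step, which is the practical selling point the abstract emphasizes.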

Journal ArticleDOI
TL;DR: In this article, a new cost function is designed to shorten the length of prediction intervals without compromising their coverage probability, and simulated annealing is used to minimize this cost function and adjust the neural network parameters.
Abstract: Short-term load forecasting is fundamental for the reliable and efficient operation of power systems. Despite its importance, accurate prediction of loads remains difficult, as uncertainties often significantly degrade the performance of load forecasting models, and no index is available to indicate the reliability of predicted values. The objective of this study is to construct prediction intervals for future loads instead of forecasting their exact values. The delta technique is applied for constructing prediction intervals for the outcomes of neural network models. Some statistical measures are developed for quantitative and comprehensive evaluation of prediction intervals. According to these measures, a new cost function is designed for shortening the length of prediction intervals without compromising their coverage probability. Simulated annealing is used for minimization of this cost function and adjustment of the neural network parameters. The results clearly show that the proposed method for constructing prediction intervals outperforms the traditional delta technique. Moreover, it yields prediction intervals that are practically more reliable and useful than exact point predictions.

222 citations
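Prediction intervals of this kind are typically scored by their empirical coverage (PICP) and a normalized average width; a cost that shortens intervals without sacrificing coverage can be sketched as below. This is a generic coverage-penalized width in the spirit of the abstract, not the paper's exact cost function; the data and the penalty constant eta are hypothetical.

```python
import numpy as np

def interval_metrics(y, lo, hi, alpha=0.05, eta=10.0):
    """PICP (empirical coverage) and normalized average width,
    combined into a coverage-penalized cost (a sketch, not the
    paper's exact formula)."""
    y, lo, hi = map(np.asarray, (y, lo, hi))
    picp = np.mean((y >= lo) & (y <= hi))            # fraction of targets inside their interval
    pinaw = np.mean(hi - lo) / (y.max() - y.min())   # average width, normalized by the target range
    # penalize only when coverage falls below the nominal 1 - alpha
    penalty = np.exp(-eta * (picp - (1 - alpha))) if picp < 1 - alpha else 0.0
    return picp, pinaw, pinaw * (1 + penalty)

# toy intervals that cover every target
y = np.array([1.0, 2.0, 3.0, 4.0])
lo, hi = y - 0.5, y + 0.5
picp, pinaw, cost = interval_metrics(y, lo, hi)
```

When coverage meets the nominal level the cost reduces to the width term alone, so an optimizer such as the simulated annealing used in the paper is pushed to shrink intervals only as far as coverage allows.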


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (86% related)
- Statistical hypothesis testing: 19.5K papers, 1M citations (80% related)
- Linear model: 19K papers, 1M citations (79% related)
- Markov chain: 51.9K papers, 1.3M citations (79% related)
- Multivariate statistics: 18.4K papers, 1M citations (79% related)
Performance Metrics

Number of papers on the topic in previous years:

Year   Papers
2024   1
2023   63
2022   153
2021   142
2020   151
2019   142