scispace - formally typeset
Topic

Coverage probability

About: Coverage probability is a research topic. Over the lifetime, 2479 publications have been published within this topic receiving 53259 citations.


Papers
Journal ArticleDOI
TL;DR: To identify the method best suited for small proportions, seven approximate methods and the Clopper–Pearson Exact method to calculate CIs were compared.
Abstract: Purpose: It is generally agreed that a confidence interval (CI) is usually more informative than a point estimate or p-value, but we rarely encounter small proportions with CIs in the pharmacoepidemiological literature. When a CI is given, it is only sporadically reported how it was calculated; this incorrectly suggests a single method to calculate CIs. To identify the method best suited for small proportions, seven approximate methods and the Clopper-Pearson Exact method to calculate CIs were compared. Methods: In a simulation study, 90%-, 95%- and 99%-CIs with sample size 1000 and proportions ranging from 0.001 to 0.01 were evaluated systematically. The main quality criteria were coverage and interval width. The methods are illustrated using data from pharmacoepidemiology studies. Results: Simulations showed that standard Wald methods have insufficient coverage probability regardless of how the desired coverage is perceived. Overall, the Exact method and the Score method with continuity correction (CC) performed best. Real-life examples showed the methods to yield different results too. Conclusions: For CIs for small proportions (pi

47 citations
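Two of the intervals compared above, the standard Wald interval and the Score (Wilson) interval with continuity correction, can be sketched and stress-tested on a small proportion. A minimal pure-Python sketch, assuming the textbook formulas; the function names and simulation settings are illustrative, not the paper's:

```python
import math
import random
from statistics import NormalDist

def wald_ci(x, n, conf=0.95):
    """Standard Wald interval: p_hat +/- z * sqrt(p_hat*(1-p_hat)/n)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_cc_ci(x, n, conf=0.95):
    """Wilson score interval with continuity correction (Newcombe's closed form)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    denom = 2 * (n + z * z)
    lo = (2 * n * p + z * z - 1
          - z * math.sqrt(z * z - 2 - 1 / n + 4 * p * (n * (1 - p) + 1))) / denom
    hi = (2 * n * p + z * z + 1
          + z * math.sqrt(z * z + 2 - 1 / n + 4 * p * (n * (1 - p) - 1))) / denom
    if x == 0:
        lo = 0.0
    if x == n:
        hi = 1.0
    return max(0.0, lo), min(1.0, hi)

def coverage(ci_fn, p, n=1000, reps=1000, seed=1):
    """Monte Carlo estimate of the probability that the CI contains the true p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = sum(rng.random() < p for _ in range(n))
        lo, hi = ci_fn(x, n)
        hits += lo <= p <= hi
    return hits / reps

# Small proportion in the paper's studied range (n = 1000, p = 0.005):
cov_wald = coverage(wald_ci, p=0.005)
cov_wilson = coverage(wilson_cc_ci, p=0.005)
```

On this configuration the Wald interval lands well below the nominal 95% coverage while the continuity-corrected score interval stays at or above it, in line with the paper's conclusion.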

Journal ArticleDOI
TL;DR: In this article, the authors describe methods for the construction of a confidence interval for median survival time based on right-censored data, where the overall probability that all intervals contain the true median is guaranteed at a fixed level.
Abstract: We describe methods for the construction of a confidence interval for median survival time based on right-censored data. These methods are extended to the construction of repeated confidence intervals for the median, based on accumulating data; here, the overall probability that all intervals contain the true median is guaranteed at a fixed level. The use of repeated confidence intervals for median survival time in post-marketing surveillance is discussed. A confidence interval for median survival time provides a useful summary of the survival experience of a group of patients. If confidence intervals are calculated repeatedly, as data accumulate, the probability that at least one interval fails to contain the median may be much higher than the error rate for a single interval, and if these confidence intervals are used in a decision-making process the probability of an incorrect decision increases accordingly. Jennison & Turnbull (1984) have proposed methods for calculating repeated confidence intervals appropriate to such situations. Similar ideas have also been discussed by Lai (1984). In § 2 we propose a new form of single-sample nonparametric confidence interval; this interval has asymptotically correct coverage probability, and Monte Carlo simulations suggest it is superior to its competitors for small sample sizes. Repeated confidence intervals for the median are presented in § 3 and their small-sample performance is assessed by Monte Carlo simulation; an example of their use is given in § 4. All the methods considered can easily be modified to give confidence intervals for other quantiles or for the survival probability at a fixed time.

47 citations
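The intervals above are built around the Kaplan-Meier median. As a minimal pure-Python sketch of that point estimate from right-censored data (function names are illustrative, and the paper's repeated-CI construction itself is not reproduced here):

```python
def km_curve(times, events):
    """Kaplan-Meier survival estimate from right-censored data.
    times: observed follow-up times; events: 1 = event observed, 0 = censored.
    Returns (time, S(t)) pairs at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= deaths + censored
    return curve

def km_median(times, events):
    """Median survival time: smallest event time with S(t) <= 0.5 (None if not reached)."""
    for t, s in km_curve(times, events):
        if s <= 0.5:
            return t
    return None

# Toy example: censoring at t = 3 and t = 6 keeps S(t) above 0.5 until t = 5.
median = km_median([1, 2, 3, 4, 5, 6, 7], [1, 1, 0, 1, 1, 0, 1])
```

The confidence intervals the paper constructs quantify the uncertainty of exactly this estimate; a test-based interval inverts a sequence of hypothesis tests on candidate medians.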

Journal ArticleDOI
TL;DR: This paper develops an analytical framework for the evaluation of the coverage probability, or equivalently the complementary cumulative density function (CCDF) of the signal-to-interference-and-noise ratio (SINR) distribution, which was not possible using the existing PPP-based models.
Abstract: Owing to its flexibility in modeling real-world spatial configurations of users and base stations (BSs), the Poisson cluster process (PCP) has recently emerged as an appealing way to model and analyze heterogeneous cellular networks (HetNets). Despite its undisputed relevance to HetNets (corroborated by the models used in the industry), the PCP's use in performance analysis has been limited. This is primarily because of the lack of analytical tools to characterize the performance metrics, such as the coverage probability of a user connected to the strongest BS. In this paper, we develop an analytical framework for the evaluation of the coverage probability, or equivalently the complementary cumulative density function (CCDF) of the signal-to-interference-and-noise ratio (SINR), of a typical user in a K-tier HetNet under a max-power-based association strategy, where the BS locations of each tier follow either a Poisson point process (PPP) or a PCP. The key enabling step involves conditioning on the parent PPPs of all the PCPs, which allows us to express the coverage probability as a product of sum-product and probability generating functionals (PGFLs) of the parent PPPs. In addition to several useful insights, our analysis provides a rigorous way to study the impact of the cluster size on the SINR distribution, which was not possible using the existing PPP-based models.

47 citations
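The metric itself, coverage probability P(SINR > θ), can be estimated by Monte Carlo. Below is a minimal sketch for the single-tier PPP baseline with nearest-BS association, Rayleigh fading, and no noise (the classical Andrews-Baccelli-Ganti setting, not the paper's PCP framework; all names and parameter values are illustrative), checked against the known closed form for path-loss exponent 4:

```python
import math
import random

def poisson_sample(rng, mean):
    """Poisson draw via Knuth's method applied in chunks, stable for large means."""
    total = 0
    while mean > 0:
        m = min(mean, 30.0)
        mean -= m
        threshold = math.exp(-m)
        k, p = 0, rng.random()
        while p > threshold:
            k += 1
            p *= rng.random()
        total += k
    return total

def sim_coverage(theta, alpha=4.0, lam=1.0, radius=8.0, reps=2000, seed=7):
    """Monte Carlo P(SIR > theta) at the origin for a single-tier PPP of BSs:
    nearest-BS association, Rayleigh fading, path loss r**-alpha, no noise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        n = poisson_sample(rng, lam * math.pi * radius ** 2)
        if n == 0:
            continue  # no BS in the window: counted as not covered
        bs = []
        for _ in range(n):
            r = max(radius * math.sqrt(rng.random()), 1e-9)  # uniform in the disk
            h = rng.expovariate(1.0)                         # Rayleigh fading power
            bs.append((r, h * r ** -alpha))
        _, serv_p = min(bs)  # nearest BS serves; the rest interfere
        interference = sum(p for _, p in bs) - serv_p
        hits += serv_p > theta * interference
    return hits / reps

def ppp_coverage_exact(theta):
    """Closed form for alpha = 4, interference-limited (Andrews-Baccelli-Ganti 2011)."""
    s = math.sqrt(theta)
    return 1.0 / (1.0 + s * (math.pi / 2 - math.atan(1.0 / s)))
```

The paper's contribution is to make this kind of quantity analytically tractable when the BS locations of some tiers are clustered (PCP) rather than purely Poisson.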

Journal ArticleDOI
TL;DR: A prediction-interval-based model for the uncertainties of tidal current prediction is proposed, built on support vector regression (SVR) and a nonparametric lower upper bound estimation (LUBE) method.
Abstract: This paper proposes a prediction interval-based model for modeling the uncertainties of tidal current prediction. The proposed model constructs the optimal prediction intervals (PIs) based on support vector regression (SVR) and a nonparametric method called a lower upper bound estimation (LUBE) method. In order to increase the modeling stability of SVRs that are used in the LUBE method, the idea of combined prediction intervals is employed. As the optimization tool, a flower pollination algorithm along with a two-phase modification method is presented to optimize the SVR parameters. The proposed model employs fuzzy membership functions to provide appropriate balance between the PI coverage probability (PICP) and PI normalized average width (PINAW), independently. The performance of the proposed model is examined on the practical tidal current data collected from the Bay of Fundy, NS, Canada.

46 citations
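The two criteria the model balances, PICP and PINAW, have simple standard definitions. A minimal Python sketch assuming those usual definitions (function names and the toy data are illustrative, not from the paper):

```python
def picp(y, lower, upper):
    """Prediction-interval coverage probability: share of targets inside their PI."""
    return sum(lo <= t <= up for t, lo, up in zip(y, lower, upper)) / len(y)

def pinaw(y, lower, upper):
    """PI normalized average width: mean PI width divided by the target range."""
    return sum(up - lo for lo, up in zip(lower, upper)) / (len(y) * (max(y) - min(y)))

# Toy data: the third interval misses its target, so PICP = 3/4.
y = [1.0, 2.0, 3.0, 4.0]
lower = [0.5, 1.5, 3.2, 3.5]
upper = [1.5, 2.5, 3.8, 4.5]
```

The two pull in opposite directions (widening every interval raises PICP but also raises PINAW), which is why the paper needs an explicit mechanism, here fuzzy membership functions, to balance them.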

Journal ArticleDOI
TL;DR: A new estimator of the common odds ratio in one-to-one matched case-control studies is proposed and is found to be more efficient than the conditional maximum likelihood estimator without being as biased as the estimator that ignores matching.
Abstract: A new estimator of the common odds ratio in one-to-one matched case-control studies is proposed. The connection between this estimator and the James-Stein estimating procedure is highlighted through the argument of estimating functions. Comparisons are made between this estimator, the conditional maximum likelihood estimator, and the estimator ignoring the matching in terms of finite sample bias, mean squared error, coverage probability, and length of confidence interval. In many situations, the new estimator is found to be more efficient than the conditional maximum likelihood estimator without being as biased as the estimator that ignores matching. The extension to multiple risk factors is also outlined.

46 citations
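For a single binary exposure, the conditional maximum likelihood estimator used as the benchmark above has a well-known closed form: the ratio of the two discordant-pair counts. A minimal sketch assuming that textbook result with a Wald CI on the log odds ratio (the paper's proposed James-Stein-type estimator is not reproduced here; the numbers are illustrative):

```python
import math
from statistics import NormalDist

def matched_pairs_or(n10, n01, conf=0.95):
    """Conditional MLE of the common odds ratio in 1:1 matched case-control data
    with a binary exposure: n10 / n01, where n10 = pairs with case exposed and
    control unexposed, n01 = the reverse. Requires both discordant counts > 0.
    Returns (or_hat, (ci_lower, ci_upper)) using a Wald interval on log-OR."""
    or_hat = n10 / n01
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    se_log = math.sqrt(1 / n10 + 1 / n01)  # SE of log(OR) for discordant pairs
    return or_hat, (or_hat * math.exp(-z * se_log), or_hat * math.exp(z * se_log))

# Illustrative discordant counts: 40 vs 20 discordant pairs gives OR = 2.
or_hat, (lo, hi) = matched_pairs_or(40, 20)
```

Finite-sample bias, interval length, and coverage probability of exactly this kind of interval are the quantities the paper uses to compare its shrinkage estimator against the conditional MLE.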


Network Information

Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (86% related)
Statistical hypothesis testing: 19.5K papers, 1M citations (80% related)
Linear model: 19K papers, 1M citations (79% related)
Markov chain: 51.9K papers, 1.3M citations (79% related)
Multivariate statistics: 18.4K papers, 1M citations (79% related)
Performance Metrics

No. of papers in the topic in previous years:
2024: 1
2023: 63
2022: 153
2021: 142
2020: 151
2019: 142