Topic
Coverage probability
About: Coverage probability is a research topic. Over its lifetime, 2479 publications have been published on this topic, receiving 53259 citations.
Papers published on a yearly basis
Papers
TL;DR: In this paper, a new generalized pivotal quantity is proposed based on the best linear unbiased estimator of the common mean and generalized inference, and an exact confidence interval is also derived.
39 citations
TL;DR: The proposed algorithm preserves the computational features of trim and fill and adds only an assumption of symmetry in the hypothesized distribution of the measured covariate and is applied to an analysis of the effect of cognitive-behavioral therapy on the risk of recidivism.
Abstract: Trim and fill is a popular method of accounting for publication bias in meta-analysis. However, the use of trim and fill is limited to the setting in which all meta-analyzed studies represent a true common effect. In many practical settings, within-study effect estimates are a function of some covariate. Because methods of accounting for publication bias in meta-regression have received little attention, we propose here a generalization of trim and fill for application in meta-regression. The proposed algorithm preserves the computational features of trim and fill and adds only an assumption of symmetry in the hypothesized distribution of the measured covariate. By simulation, we evaluate properties (mean bias, root mean squared error, and coverage probability) of meta-regression parameter estimates and corresponding confidence intervals with application of the proposed algorithm in a range of scenarios, including violation of the aforementioned assumption of symmetry. We also evaluate the performance of common estimators of the number of suppressed studies. In general, we show that the proposed algorithm is successful in identifying suppression of studies and reducing the bias in regression parameters derived from the analysis of the augmented set of studies. We apply the proposed algorithm to an analysis of the effect of cognitive-behavioral therapy on the risk of recidivism. Copyright © 2012 John Wiley & Sons, Ltd.
39 citations
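The bias this abstract describes can be seen in a toy simulation (not the paper's meta-regression algorithm): when studies with the smallest effects are suppressed from publication, the pooled fixed-effect estimate drifts upward, which is the distortion trim and fill attempts to correct. All parameter values below are illustrative choices.

```python
import numpy as np

# Toy illustration of publication bias: suppressing the studies with the
# smallest effect estimates biases the pooled (inverse-variance-weighted)
# estimate upward. This is NOT the trim-and-fill algorithm itself, only a
# sketch of the problem it addresses.
rng = np.random.default_rng(2)
true_effect, n_studies = 0.3, 200
se = rng.uniform(0.05, 0.5, size=n_studies)   # per-study standard errors
est = rng.normal(true_effect, se)             # observed study effects
w = 1.0 / se**2                               # inverse-variance weights

pooled_all = np.average(est, weights=w)       # pooled estimate, all studies

# Suppress the 30% of studies with the smallest effect estimates,
# mimicking publication bias against null or negative findings.
keep = est > np.quantile(est, 0.30)
pooled_pub = np.average(est[keep], weights=w[keep])

print(f"all studies:       {pooled_all:.3f}")
print(f"after suppression: {pooled_pub:.3f}")  # biased upward
```

Trim and fill works in the opposite direction: it estimates how many studies were suppressed and imputes them to restore the symmetry of the funnel plot before re-pooling.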
TL;DR: In this paper, confidence intervals for the variance of a normal distribution with unknown mean are constructed which improve upon the usual shortest interval based on the sample variance alone, and the posterior probabilities of the intervals are examined numerically.
Abstract: Confidence intervals for the variance of a normal distribution with unknown mean are constructed which improve upon the usual shortest interval based on the sample variance alone. These intervals have guaranteed coverage probability uniformly greater than a predetermined value $1-\alpha$ and have uniformly shorter length. Using information relating the size of the sample mean to that of the sample variance, we smoothly shift the usual minimum length interval closer to zero, simultaneously bringing the endpoints closer to each other. The gains in coverage probability and expected length are also investigated numerically. Lastly, we examine the posterior probabilities of the intervals, quantities which can be used as post-data confidence reports.
39 citations
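The baseline that this paper improves upon, the standard equal-tailed chi-square interval for a normal variance, can be checked by a minimal Monte Carlo sketch (my own illustration, not the paper's construction; the sample size and variance below are arbitrary):

```python
import numpy as np
from scipy import stats

# Empirical coverage of the usual equal-tailed chi-square interval for the
# variance of a normal sample with unknown mean:
#   [(n-1) s^2 / chi2_{1-a/2},  (n-1) s^2 / chi2_{a/2}]
rng = np.random.default_rng(0)
n, sigma2, alpha, reps = 20, 4.0, 0.05, 20_000

q_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)  # upper chi-square quantile
q_lo = stats.chi2.ppf(alpha / 2, df=n - 1)      # lower chi-square quantile

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2 = x.var(axis=1, ddof=1)                      # sample variances
lower = (n - 1) * s2 / q_hi                     # CI lower endpoints
upper = (n - 1) * s2 / q_lo                     # CI upper endpoints
cover = np.mean((lower <= sigma2) & (sigma2 <= upper))
print(f"empirical coverage ≈ {cover:.3f}")      # close to 0.95
```

The paper's point is that intervals like this ignore the sample mean entirely; exploiting it yields uniformly shorter intervals with coverage strictly above $1-\alpha$.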
TL;DR: This paper presents the downlink coverage and rate analysis of a cellular vehicle-to-everything (C-V2X) communication network where the locations of vehicular nodes and road side units (RSUs) are modeled as Cox processes driven by a Poisson line process (PLP) and the locations of cellular macro base stations (MBSs) are modeled as a 2D Poisson point process (PPP).
Abstract: In this paper, we present the downlink coverage and rate analysis of a cellular vehicle-to-everything (C-V2X) communication network where the locations of vehicular nodes and road side units (RSUs) are modeled as Cox processes driven by a Poisson line process (PLP) and the locations of cellular macro base stations (MBSs) are modeled as a 2D Poisson point process (PPP). Assuming a fixed selection bias and maximum average received power based association, we compute the probability with which a typical receiver, i.e., an arbitrarily chosen receiving node, connects to a vehicular node or RSU, or to a cellular MBS. For this setup, we derive the signal-to-interference ratio (SIR)-based coverage probability of the typical receiver. One of the key challenges in the computation of coverage probability stems from the inclusion of shadowing effects. As the standard procedure of interpreting the shadowing effects as random displacement of the location of nodes is not directly applicable to the Cox process, we propose an approximation of the spatial model inspired by the asymptotic behavior of the Cox process. Using this asymptotic characterization, we derive the coverage probability in terms of the Laplace transform of interference power distribution. Further, we compute the downlink rate coverage of the typical receiver by characterizing the load on the serving vehicular nodes or RSUs and serving MBSs. We also provide several key design insights by studying the trends in the coverage probability and rate coverage as a function of network parameters. We observe that the improvement in rate coverage obtained by increasing the density of MBSs can be equivalently achieved by tuning the selection bias appropriately without the need to deploy additional MBSs.
39 citations
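The paper's spatial model (Cox processes on a PLP, RSUs, biased association, shadowing) is far richer than anything that fits here, but the core notion of SIR-based coverage probability can be illustrated with a heavily simplified single-tier sketch: a 2D PPP of base stations, nearest-BS association, Rayleigh fading, and no noise. All parameter values are illustrative assumptions.

```python
import numpy as np

# Simplified Monte Carlo of the SIR coverage probability for a typical
# receiver at the origin, served by the nearest base station of a 2D PPP,
# with Rayleigh fading and path-loss exponent 4. Only a sketch of what
# "coverage probability" means; not the paper's C-V2X model.
rng = np.random.default_rng(1)

def sir_coverage(lam=1e-5, alpha=4.0, theta_db=0.0,
                 radius=3000.0, reps=2000):
    theta = 10 ** (theta_db / 10)            # SIR threshold (linear)
    area = np.pi * radius**2
    covered = 0
    for _ in range(reps):
        n = rng.poisson(lam * area)          # number of BSs in the disc
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))  # uniform distances in a disc
        h = rng.exponential(size=n)          # Rayleigh fading powers
        rx = h * r ** (-alpha)               # received powers at origin
        k = np.argmin(r)                     # associate with nearest BS
        interf = rx.sum() - rx[k]
        if interf <= 0 or rx[k] / interf > theta:
            covered += 1
    return covered / reps

print(f"SIR coverage at 0 dB ≈ {sir_coverage():.3f}")
```

For this interference-limited setting with path-loss exponent 4 and a 0 dB threshold, the well-known closed-form result for PPP networks puts the coverage probability near 0.56, which the simulation should approximate.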
TL;DR: In this article, the authors investigate the cost of using the exact one- and two-sided Clopper–Pearson confidence intervals rather than shorter approximate intervals, first in terms of increased expected length and then in terms of the increase in sample size required to obtain a desired expected length.
Abstract: When computing a confidence interval for a binomial proportion $p$ one must choose between using an exact interval, which has a coverage probability of at least $1-\alpha$ for all values of $p$, and a shorter approximate interval, which may have lower coverage for some $p$ but that on average has coverage equal to $1-\alpha$. We investigate the cost of using the exact one and two-sided Clopper–Pearson confidence intervals rather than shorter approximate intervals, first in terms of increased expected length and then in terms of the increase in sample size required to obtain a desired expected length. Using asymptotic expansions, we also give a closed-form formula for determining the sample size for the exact Clopper–Pearson methods. For two-sided intervals, our investigation reveals an interesting connection between the frequentist Clopper–Pearson interval and Bayesian intervals based on noninformative priors.
39 citations
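The guarantee the abstract contrasts against approximate intervals, coverage of at least $1-\alpha$ for every $p$, can be verified exactly rather than by simulation, since the binomial outcome space is finite. A minimal sketch (my own illustration; $n$ and $\alpha$ are arbitrary choices):

```python
from scipy import stats

# Exact coverage probability of the two-sided Clopper–Pearson interval,
# computed by summing binomial probabilities over all outcomes x for
# which the interval contains p.
def clopper_pearson(x, n, alpha=0.05):
    # Endpoints via beta quantiles; closed-form at the boundary outcomes.
    lower = stats.beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = stats.beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

def coverage(p, n, alpha=0.05):
    cov = 0.0
    for x in range(n + 1):
        lo, hi = clopper_pearson(x, n, alpha)
        if lo <= p <= hi:
            cov += stats.binom.pmf(x, n, p)
    return cov

n = 30
for p in (0.1, 0.3, 0.5):
    print(f"p = {p}: coverage = {coverage(p, n):.4f}")  # each >= 0.95
```

Because the coverage function never dips below $1-\alpha$, the exact interval is conservative, and the resulting excess expected length relative to approximate intervals is precisely the cost the paper quantifies.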