Topic: Coverage probability

About: Coverage probability is a research topic. Over its lifetime, 2,479 publications have been published within this topic, receiving 53,259 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors proposed the false coverage-statement rate (FCR) as a measure of interval coverage following selection, together with a general procedure that constructs a marginal CI for each selected parameter: instead of the confidence level 1 − q being used marginally, q is divided by the number of parameters considered and multiplied by the number selected.
Abstract: Often in applied research, confidence intervals (CIs) are constructed or reported only for parameters selected after viewing the data. We show that such selected intervals fail to provide the assumed coverage probability. By generalizing the false discovery rate (FDR) approach from multiple testing to selected multiple CIs, we suggest the false coverage-statement rate (FCR) as a measure of interval coverage following selection. A general procedure is then introduced, offering FCR control at level q under any selection rule. The procedure constructs a marginal CI for each selected parameter, but instead of the confidence level 1 − q being used marginally, q is divided by the number of parameters considered and multiplied by the number selected. If we further use the FDR controlling testing procedure of Benjamini and Hochberg for selecting the parameters, the newly suggested procedure offers CIs that are dual to the testing procedure and are shown to be optimal in the independent case. Under the positive re...
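The FCR adjustment is easy to state in code. Below is a minimal sketch, assuming approximately normal estimators with known standard errors; the function name and arguments are illustrative, not from the paper.

```python
# Minimal sketch of FCR-adjusted confidence intervals following selection,
# assuming approximately normal estimators; names are illustrative.
import numpy as np
from scipy import stats

def fcr_adjusted_cis(estimates, std_errors, selected, q=0.05):
    """Marginal CIs for the selected parameters at level 1 - q*R/m,
    where R = number selected and m = number of parameters considered."""
    estimates = np.asarray(estimates)
    std_errors = np.asarray(std_errors)
    m = len(estimates)
    R = int(np.sum(selected))
    alpha = q * R / m                    # FCR-adjusted miscoverage level
    z = stats.norm.ppf(1 - alpha / 2)
    lo = estimates[selected] - z * std_errors[selected]
    hi = estimates[selected] + z * std_errors[selected]
    return lo, hi
```

As the abstract notes, if the parameters are selected with the Benjamini-Hochberg procedure at level q, intervals of this form are dual to the tests.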

591 citations

Journal ArticleDOI
TL;DR: In this article, it was shown that any valid confidence set for a locally almost unidentified parameter must be unbounded with probability close to 1 − α in the neighborhood of nonidentification subsets and must have a nonzero probability of being unbounded under any distribution compatible with the model; a confidence set that does not satisfy this characterization has zero coverage probability (level).
Abstract: General characterizations of valid confidence sets and tests in problems which involve locally almost unidentified (LAU) parameters are provided and applied to several econometric models. Two types of inference problems are studied: (i) inference about parameters which are not identifiable on certain subsets of the parameter space, and (ii) inference about parameter transformations with discontinuities. When a LAU parameter or parametric function has an unbounded range, it is shown under general regularity conditions that any valid confidence set with level 1 − α for this parameter must be unbounded with probability close to 1 − α in the neighborhood of nonidentification subsets and will have a nonzero probability of being unbounded under any distribution compatible with the model: no valid confidence set that is almost surely bounded exists. These properties hold even if "identifying restrictions" are imposed. Similar results also obtain for parameters with bounded ranges. Consequently, a confidence set which does not satisfy this characterization has zero coverage probability (level). This will be the case in particular for Wald-type confidence intervals based on asymptotic standard errors. Furthermore, Wald-type statistics for testing given values of a LAU parameter cannot be pivotal functions (i.e., they have distributions which depend on unknown nuisance parameters) and cannot even be usefully bounded over the space of the nuisance parameters. These results are applied to several econometric problems: inference in simultaneous equations (instrumental variables (IV) regressions), linear regressions with autoregressive errors, and inference about long-run multipliers and cointegrating vectors. For example, it is shown that standard "asymptotically justified" confidence intervals based on IV estimators (such as two-stage least squares) and the associated "standard errors" have zero coverage probability, and the corresponding t statistics have distributions which cannot be bounded by any finite set of distribution functions, a result of interest for interpreting IV regressions with "weak instruments." Furthermore, expansion methods (e.g., Edgeworth expansions) and bootstrap techniques cannot solve these difficulties. Finally, in a number of cases where Wald-type methods are fundamentally flawed (e.g., IV regressions with poor instruments), it is observed that likelihood-based methods (e.g., likelihood-ratio tests and confidence sets) combined with projection techniques can easily yield valid tests and confidence sets.
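As the abstract's final point suggests, valid confidence sets in these settings come from inverting identification-robust tests rather than from Wald intervals. The sketch below inverts the Anderson-Rubin test (a standard identification-robust statistic, named here by us rather than by the abstract) for a single endogenous regressor over a grid of candidate values; it assumes exogenous instruments and homoskedastic normal errors, and all names are illustrative. Note that the resulting set can be unbounded, which here would show up as acceptance over the whole grid.

```python
# Sketch: confidence set for an IV coefficient by Anderson-Rubin test inversion.
# Under H0: beta = b0 with k exogenous instruments and homoskedastic normal
# errors, the AR statistic is distributed F(k, n - k).
import numpy as np
from scipy import stats

def ar_confidence_set(y, x, Z, beta_grid, alpha=0.05):
    n, k = Z.shape
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    crit = stats.f.ppf(1 - alpha, k, n - k)
    accepted = []
    for b0 in beta_grid:
        u = y - x * b0                       # structural residual under H0
        num = u @ Pz @ u / k
        den = u @ (u - Pz @ u) / (n - k)     # u' M_Z u / (n - k)
        if num / den <= crit:                # keep b0 if the AR test accepts
            accepted.append(b0)
    return np.array(accepted)
```

With weak instruments the accepted set stays wide or unbounded instead of shrinking to a misleadingly short interval, which is exactly the behavior the abstract shows any valid set must have.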

567 citations

Journal ArticleDOI
TL;DR: A method has been developed for calculating CTV-to-PTV margin size based on the assumption that the CTV should be adequately irradiated with a high probability; it is demonstrated to be fast and accurate for a prostate, cervix, and lung cancer case.
Abstract: Purpose: Following the ICRU-50 recommendations, geometrical uncertainties in tumor position during radiotherapy treatments are generally included in the treatment planning by adding a margin to the clinical target volume (CTV) to yield the planning target volume (PTV). We have developed a method for automatic calculation of this margin. Methods and Materials: Geometrical uncertainties of a specific patient group can normally be characterized by the standard deviation of the distribution of systematic deviations in the patient group (Σ) and by the average standard deviation of the distribution of random deviations (σ). The CTV of a patient to be planned can be represented in a 3D matrix in the treatment room coordinate system, with voxel values one inside and zero outside the CTV. Convolution of this matrix with the appropriate probability distributions for translations and rotations yields a matrix of coverage probabilities (CPs), defined for each point as the probability of being covered by the CTV. The PTV can then be chosen as a volume corresponding to a certain iso-probability level. Separate calculations are performed for systematic and random deviations. Iso-probability volumes are selected in such a way that a high percentage of the CTV volume (on average > 99%) receives a high dose (> 95%). The consequences of systematic deviations for the dose distribution in the CTV can be estimated by calculating dose histograms of the CP matrix for systematic deviations, resulting in a so-called dose probability histogram (DPH). A DPH represents the average dose volume histogram (DVH) over all systematic deviations in the patient group. The consequences of random deviations can be calculated by convolution of the dose distribution with the probability distributions for random deviations. Using the convolved dose matrix in the DPH calculation yields full information about the influence of geometrical uncertainties on the dose in the CTV. Results: The model is demonstrated to be fast and accurate for a prostate, cervix, and lung cancer case. A CTV-to-PTV margin size which ensures at least 95% dose to (on average) 99% of the CTV appears to be about 2Σ + 0.7σ for all three cases. Because rotational deviations are included, the resulting margins can be anisotropic, as shown for the prostate cancer case. Conclusion: A method has been developed for calculation of CTV-to-PTV margins based on the assumption that the CTV should be adequately irradiated with a high probability.
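A minimal sketch of the coverage-probability construction described above, restricted to 1D translational systematic errors (the paper handles 3D translations plus rotations and derives the iso-probability level from dose-coverage criteria); the mask size, Σ, and threshold below are illustrative assumptions.

```python
# 1D illustration: convolve a binary CTV mask with the systematic-error
# distribution and threshold the coverage probability to obtain a PTV.
import numpy as np
from scipy.ndimage import gaussian_filter1d

voxel_mm = 1.0
ctv = np.zeros(200)
ctv[80:120] = 1.0                          # binary CTV mask (1 inside, 0 outside)

Sigma = 3.0                                # SD of systematic deviations (mm)
cp = gaussian_filter1d(ctv, Sigma / voxel_mm)   # coverage-probability profile

ptv = cp >= 0.05                           # iso-probability level (illustrative)
margin_mm = (ptv.sum() - ctv.sum()) / 2 * voxel_mm
print(f"CTV-to-PTV margin: {margin_mm:.1f} mm")
```

In the paper the same convolution idea runs in 3D, with rotations included, which is why the resulting margins can be anisotropic.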

547 citations

Journal ArticleDOI
TL;DR: A new, fast, yet reliable method is proposed for the construction of PIs for NN predictions; quantitative comparison with three traditional techniques for prediction interval construction reveals that the LUBE method is simpler, faster, and more reliable.
Abstract: Prediction intervals (PIs) have been proposed in the literature to provide more information by quantifying the level of uncertainty associated to the point forecasts. Traditional methods for construction of neural network (NN) based PIs suffer from restrictive assumptions about data distribution and massive computational loads. In this paper, we propose a new, fast, yet reliable method for the construction of PIs for NN predictions. The proposed lower upper bound estimation (LUBE) method constructs an NN with two outputs for estimating the prediction interval bounds. NN training is achieved through the minimization of a proposed PI-based objective function, which covers both interval width and coverage probability. The method does not require any information about the upper and lower bounds of PIs for training the NN. The simulated annealing method is applied for minimization of the cost function and adjustment of NN parameters. The demonstrated results for 10 benchmark regression case studies clearly show the LUBE method to be capable of generating high-quality PIs in a short time. Also, the quantitative comparison with three traditional techniques for prediction interval construction reveals that the LUBE method is simpler, faster, and more reliable.
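The heart of the method is a PI-based cost that trades interval width against coverage probability. The sketch below is in the spirit of that objective but is not the paper's exact formula; the penalty shape and constants are illustrative assumptions.

```python
# Sketch of a PI-based objective in the spirit of LUBE: reward narrow intervals,
# penalize coverage below a nominal level. Constants are illustrative.
import numpy as np

def pi_cost(y, lower, upper, nominal=0.95, eta=50.0):
    # PI coverage probability: fraction of targets inside their interval
    picp = np.mean((y >= lower) & (y <= upper))
    # mean interval width, normalized by the target range
    width = np.mean(upper - lower) / (y.max() - y.min())
    # exponential penalty only when coverage falls below the nominal level
    penalty = np.exp(-eta * (picp - nominal)) if picp < nominal else 0.0
    return width * (1.0 + penalty)
```

In LUBE, a cost of this kind is minimized by simulated annealing over the parameters of an NN whose two outputs are the lower and upper interval bounds.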

533 citations

Journal ArticleDOI
TL;DR: It is shown that the known issues of the RE model, underestimation of the statistical error and spuriously overconfident estimates, can be resolved by using an estimator under the fixed-effect model assumption with a quasi-likelihood based variance structure: the IVhet model.
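A minimal sketch of an IVhet-style estimator as commonly described: the point estimate keeps fixed-effect (inverse-variance) weights, while the variance is inflated using a heterogeneity estimate such as the DerSimonian-Laird τ². Details may differ from the paper's exact estimator, so treat this as an assumption-laden illustration.

```python
# Hedged sketch of an IVhet-style pooled estimate (illustrative; consult the
# paper for the exact estimator). Inputs: per-study effects y and variances v.
import numpy as np

def ivhet(y, v):
    w = 1.0 / v                                 # fixed-effect (IV) weights
    theta = np.sum(w * y) / np.sum(w)           # pooled point estimate
    # DerSimonian-Laird tau^2, used here only to inflate the variance,
    # not to re-weight the studies as the RE model would
    Q = np.sum(w * (y - theta) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    var = np.sum((w / np.sum(w)) ** 2 * (v + tau2))
    return theta, np.sqrt(var)
```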

386 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 86% related
Statistical hypothesis testing: 19.5K papers, 1M citations, 80% related
Linear model: 19K papers, 1M citations, 79% related
Markov chain: 51.9K papers, 1.3M citations, 79% related
Multivariate statistics: 18.4K papers, 1M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2024  1
2023  63
2022  153
2021  142
2020  151
2019  142