Topic

Coverage probability

About: Coverage probability is a research topic. Over its lifetime, 2,479 publications have been published within this topic, receiving 53,259 citations.


Papers
Journal ArticleDOI
TL;DR: The authors propose exact one-sided confidence intervals that order the sample space by p-value, for early-phase clinical trials in which actual enrollment deviates from Simon's two-stage design and the original critical values are no longer valid for proper statistical inference.
Abstract: Simon's two-stage design has been widely used in early-phase clinical trials to assess the activity of a new investigational treatment. In practice, the actual sample sizes do not always follow the study design precisely, especially in the second stage. When over- or under-enrollment occurs in a study, the original critical values for the study design are no longer valid for making proper statistical inference in a clinical trial. The hypothesis for such studies is always one-sided, and the null hypothesis is rejected when only a few responses are observed; a one-sided lower interval is therefore suitable for testing the hypothesis. The commonly used approaches to confidence-interval construction are asymptotic and generally do not guarantee the coverage probability. For this reason, the Clopper-Pearson approach can be used to compute exact confidence intervals. This approach has to be used in conjunction with a method for ordering the sample space. The frequently used ordering is based on point estimates of the response rate, but it produces many ties, which make the exact intervals conservative. We propose exact one-sided intervals that order the sample space by the p-value. The proposed approach outperforms the existing asymptotic and exact approaches and is therefore recommended for use in practice.

25 citations
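The standard Clopper-Pearson lower limit that the abstract above builds on can be sketched directly; the authors' proposed p-value ordering of the sample space is not reproduced here, and the function names are illustrative. The exact one-sided bound is the alpha-quantile of a Beta(x, n - x + 1) distribution, and its coverage can be checked by exact enumeration over the binomial sample space:

```python
from scipy import stats

def cp_lower(x, n, alpha=0.05):
    """One-sided lower Clopper-Pearson confidence limit for a binomial proportion."""
    if x == 0:
        return 0.0
    return stats.beta.ppf(alpha, x, n - x + 1)

def exact_coverage(p, n, alpha=0.05):
    """Exact coverage of the interval [cp_lower(X, n), 1] at true response rate p."""
    return sum(stats.binom.pmf(x, n, p)
               for x in range(n + 1) if cp_lower(x, n, alpha) <= p)

# The exact interval is conservative: coverage never falls below 1 - alpha,
# which is precisely the guarantee asymptotic intervals lack.
n, alpha = 30, 0.05
cov = [exact_coverage(p / 100, n, alpha) for p in range(1, 100)]
print(min(cov))
```

The conservativeness the abstract mentions shows up here as min(cov) sitting strictly above 0.95; ordering the sample space more finely (e.g., by p-value) reduces that gap.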

Journal ArticleDOI
TL;DR: An extensive literature review on estimating the statistical cut point shows that, with the small number of assays typically conducted in practice, the actual coverage probability of the approximate-normal lower confidence limit for a normal percentile is much larger than the required confidence level.
Abstract: The cut point of the immunogenicity screening assay is the level of response at or above which a sample is defined to be positive and below which it is defined to be negative. The Food and Drug Administration Guidance for Industry on Assay Development for Immunogenicity Testing of Therapeutic Protein Products recommends that the cut point be the upper 95th percentile of the negative-control patients. In this article, we assume that the assay data are a random sample from a normal distribution. The sample normal percentile is a point estimate whose variability decreases as the sample size increases. Therefore, the sample percentile does not assure at least a 5% false-positive rate (FPR) with a high confidence level (e.g., 90%) when the sample size is not sufficiently large. With this concern, we propose instead using a lower confidence limit for a percentile as the cut point. We conducted an extensive literature review on estimating the statistical cut point and compared several selected methods for immunogenicity screening assay cut-point determination in terms of bias, coverage probability, and FPR. The methods evaluated are the sample normal percentile, the exact lower confidence limit of a normal percentile (Chakraborti and Li, 2007), and the approximate lower confidence limit of a normal percentile. It is shown that the actual coverage probability of the approximate-normal lower confidence limit is much larger than the required confidence level for the small number of assays conducted in practice. We recommend using the exact lower confidence limit of a normal percentile for cut-point determination.

25 citations
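The exact lower confidence limit for a normal percentile that the abstract recommends can be computed from the noncentral t distribution. A minimal sketch under the paper's normality assumption (the function name is illustrative; this is the standard construction, not the authors' code): since sqrt(n)*(xbar - xi_p)/s follows a noncentral t with n - 1 degrees of freedom and noncentrality -z_p*sqrt(n), inverting that pivot gives the limit.

```python
import numpy as np
from scipy import stats

def exact_lower_limit(data, p=0.95, conf=0.90):
    """Exact lower (conf)-level confidence limit for the p-th percentile of a
    normal population, via the noncentral t distribution."""
    x = np.asarray(data, dtype=float)
    n = x.size
    zp = stats.norm.ppf(p)  # standard normal p-th percentile
    # Pivot: sqrt(n)*(xbar - xi_p)/s ~ noncentral t(df=n-1, nc=-zp*sqrt(n))
    t = stats.nct.ppf(conf, df=n - 1, nc=-zp * np.sqrt(n))
    return x.mean() - t * x.std(ddof=1) / np.sqrt(n)

rng = np.random.default_rng(0)
sample = rng.normal(size=50)  # toy negative-control data
lo = exact_lower_limit(sample, p=0.95, conf=0.90)
print(lo)
```

Because the pivot is exact under normality, this limit attains the required confidence level even for the small assay counts the abstract warns about, unlike the approximate-normal method.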

Journal ArticleDOI
TL;DR: In this article, a random-parameter bivariate zero-inflated negative binomial (RBZINB) regression model was proposed for analyzing the effects of investigated variables on crash frequencies.
Abstract: This paper proposes a random-parameter bivariate zero-inflated negative binomial (RBZINB) regression model for analyzing the effects of investigated variables on crash frequencies. A Bayesian approach is employed as the estimation method, which has the strength of accounting for the uncertainties related to models and parameter values. The modeling framework has been applied to the bivariate injury crash counts obtained from 1000 intersections in Tennessee over a five-year period. The results reveal that the proposed RBZINB model outperforms other investigated models and provides a superior fit. The proposed RBZINB model is useful in gaining new insights into how crash occurrences are influenced by the risk factors. In addition, the empirical studies show that the proposed RBZINB model has a smaller prediction bias and variance, as well as more accurate coverage probability in estimating model parameters and crash-free probabilities.

25 citations
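The zero-inflated negative binomial at the core of the RBZINB model mixes a point mass at zero (the "crash-free" state) with an ordinary negative binomial count distribution. A minimal univariate sketch of that pmf (the paper's random-parameter bivariate Bayesian formulation is far richer; parameter names here are illustrative):

```python
import numpy as np
from scipy import stats

def zinb_pmf(k, pi, r, p):
    """Zero-inflated negative binomial pmf: with probability pi the count is a
    structural zero; otherwise it is drawn from a negative binomial NB(r, p)."""
    k = np.asarray(k)
    nb = stats.nbinom.pmf(k, r, p)
    return np.where(k == 0, pi + (1 - pi) * nb, (1 - pi) * nb)

# Sanity check: the pmf sums to one over (effectively all of) the support.
probs = zinb_pmf(np.arange(0, 500), pi=0.3, r=2.0, p=0.4)
print(probs.sum())
```

The extra mass at zero is what lets the model fit intersection crash counts with many more zeros than a plain negative binomial predicts.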

Journal ArticleDOI
Paul Kabaila1
TL;DR: A new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction, is derived; the coverage probability at any given parameter value provides an upper bound on the minimum coverage probability of the naive confidence interval.
Abstract: Summary This paper considers a linear regression model with regression parameter vector β. The parameter of interest is θ = aᵀβ where a is specified. When, as a first step, a data-based variable selection (e.g. minimum Akaike information criterion) is used to select a model, it is common statistical practice to then carry out inference about θ, using the same data, based on the (false) assumption that the selected model had been provided a priori. The paper considers a confidence interval for θ with nominal coverage 1 - α constructed on this (false) assumption, and calls this the naive 1 - α confidence interval. The minimum coverage probability of this confidence interval can be calculated for simple variable selection procedures involving only a single variable. However, the kinds of variable selection procedures used in practice are typically much more complicated. For the real-life data presented in this paper, there are 20 variables each of which is to be either included or not, leading to 2^20 different models. The coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval. This paper derives a new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction. For these real-life data, the gain in efficiency of this Monte Carlo simulation due to conditioning ranged from 2 to 6. The paper also presents a simple one-dimensional search strategy for parameter values at which the coverage probability is relatively small. For these real-life data, this search leads to parameter values for which the coverage probability of the naive 0.95 confidence interval is 0.79 for variable selection using the Akaike information criterion and 0.70 for variable selection using the Bayes information criterion, showing that these confidence intervals are completely inadequate.

25 citations
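The under-coverage phenomenon the abstract describes is easy to reproduce by plain Monte Carlo in the simple single-variable-selection case it mentions. The toy sketch below (illustrative; it does not implement Kabaila's conditional variance-reduction estimator or the 20-variable search) selects between a two-regressor and a one-regressor model by a t-test on x2, then builds the naive CI for β1 as if no selection had occurred:

```python
import numpy as np
from scipy import stats

def naive_coverage(beta2, n=50, reps=2000, alpha=0.05, seed=0):
    """Monte Carlo coverage of the naive CI for beta1 when x2 is first kept or
    dropped by a t-test and the selection step is then ignored."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(size=n)
    x2 = 0.7 * x1 + 0.5 * rng.normal(size=n)  # correlated regressors
    X = np.column_stack([x1, x2])
    XtX_inv = np.linalg.inv(X.T @ X)
    z = stats.norm.ppf(1 - alpha / 2)  # normal quantile, for simplicity
    hits = 0
    for _ in range(reps):
        y = 1.0 * x1 + beta2 * x2 + rng.normal(size=n)
        b = XtX_inv @ (X.T @ y)  # full-model least squares
        resid = y - X @ b
        s2 = resid @ resid / (n - 2)
        if abs(b[1]) / np.sqrt(s2 * XtX_inv[1, 1]) > 2:  # keep x2
            b1, se1 = b[0], np.sqrt(s2 * XtX_inv[0, 0])
        else:  # drop x2 and refit on x1 alone
            b1 = (x1 @ y) / (x1 @ x1)
            r = y - b1 * x1
            se1 = np.sqrt((r @ r) / (n - 1) / (x1 @ x1))
        hits += abs(b1 - 1.0) <= z * se1
    return hits / reps

# Near beta2 = 0 selection is nearly harmless; at moderate beta2 the omitted-
# variable bias of the post-selection estimator drags coverage well below 0.95.
print(naive_coverage(0.0), naive_coverage(0.3))
```

The dip at moderate β2 mirrors the paper's finding that the nominal 0.95 naive interval can cover as rarely as 0.70 to 0.79 in real variable-selection problems.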

Journal ArticleDOI
TL;DR: In this paper, a class of confidence sets with constant coverage probability for the mean of a p-variate normal distribution is proposed through a pseudo-empirical-Bayes construction.
Abstract: A class of confidence sets with constant coverage probability for the mean of a p-variate normal distribution is proposed through a pseudo-empirical-Bayes construction. When the dimension is greater than 2, by combining analytical results with some exact numerical calculations the proposed sets are proved to have a uniformly smaller volume than the usual confidence region. Sufficient conditions for the connectedness of the proposed confidence sets are also derived. In addition, our confidence sets can be used to construct tests for point null hypotheses. The resulting tests have convex acceptance regions and hence are admissible by a classical result of Birnbaum. Tabular results comparing the proposed region with other confidence sets are also given.

25 citations
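For context, the "usual confidence region" that the proposed sets dominate is the ball centered at the observation, whose coverage probability is constant in the mean. A quick Monte Carlo check of that constancy in a known-covariance, single-observation toy setting (illustrative only; the paper's pseudo-empirical-Bayes sets are not reproduced here):

```python
import numpy as np
from scipy import stats

p, alpha = 5, 0.05
c = stats.chi2.ppf(1 - alpha, df=p)  # squared radius of the usual confidence ball
rng = np.random.default_rng(0)

def coverage(mu, reps=20000):
    """Monte Carlo coverage of {m : ||X - m||^2 <= c} for X ~ N_p(mu, I)."""
    X = rng.normal(size=(reps, p)) + mu
    return np.mean(np.sum((X - mu) ** 2, axis=1) <= c)

# Coverage is the same 1 - alpha wherever the true mean sits, because
# ||X - mu||^2 is chi-squared with p degrees of freedom regardless of mu.
print(coverage(np.zeros(p)), coverage(np.full(p, 3.0)))
```

Constructions like the paper's keep this constant coverage while shrinking the set's volume when p > 2, analogous to Stein-type improvements in point estimation.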


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 86% related
Statistical hypothesis testing: 19.5K papers, 1M citations, 80% related
Linear model: 19K papers, 1M citations, 79% related
Markov chain: 51.9K papers, 1.3M citations, 79% related
Multivariate statistics: 18.4K papers, 1M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year | Papers
2024 | 1
2023 | 63
2022 | 153
2021 | 142
2020 | 151
2019 | 142