Topic
Coverage probability
About: Coverage probability is a research topic. Over its lifetime, 2,479 publications have appeared on this topic, receiving 53,259 citations.
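As a self-contained illustration of the concept (a sketch assumed for this page, not taken from any paper below), the coverage probability of a confidence-interval procedure can be estimated by Monte Carlo: repeatedly draw samples, form the nominal 95% interval from each, and count how often the interval contains the true mean. All function names and parameter choices here are illustrative.

```python
import random
import statistics

def coverage_estimate(n=30, trials=2000, mu=0.0, z=1.96, seed=1):
    """Monte Carlo estimate of the coverage probability of a nominal
    95% z-interval for the mean of a standard normal sample.  With z
    in place of the Student-t critical value, the estimate typically
    lands slightly below the nominal 0.95."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(mu, 1.0) for _ in range(n)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        # Count the trial as a "hit" if the interval covers the true mean.
        if m - z * se <= mu <= m + z * se:
            hits += 1
    return hits / trials
```

Replacing `z` with the Student-t critical value for `n - 1` degrees of freedom would bring the estimated coverage closer to 0.95.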
Papers
TL;DR: The jackknife empirical likelihood (JEL) method is employed to construct confidence intervals for the difference of two correlated continuous-scale ROC curves, avoiding the several nuisance parameters that existing methods must estimate.
28 citations
TL;DR: A graphic indicator, the receiver operating characteristic curve of prediction interval (ROC-PI), is designed from the definition of the ROC curve; it depicts the trade-off between prediction-interval width and coverage probability across a series of cut-off points.
Abstract: Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels, and provide uncertainty quantification, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially well suited to anomaly detection for sensing series. One key parameter of such prediction models is the coverage probability (CP), which controls the judging threshold for a testing sample and is generally set to a default value (e.g., 90% or 95%); there are few criteria for choosing the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of prediction interval (ROC-PI), based on the definition of the ROC curve, which depicts the trade-off between PI width and PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs; the optimal CP is derived by minimizing this index with the simulated annealing (SA) algorithm. Experiments on two simulation datasets demonstrate the validity of the proposed method, and an actual case study on sensing series from an on-orbit satellite illustrates its strong performance in practical application.
28 citations
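The index the paper modifies is the classic Youden J = sensitivity + specificity − 1, evaluated across cut-offs; the paper's modified index and its simulated-annealing search are not reproduced here. A minimal sketch of the unmodified version, with illustrative names and toy conventions (score above cut-off flags an anomaly):

```python
def youden_j(scores, labels, cutoff):
    """Classic Youden index J = sensitivity + specificity - 1 at one
    cut-off.  labels: 1 = anomaly, 0 = normal."""
    tp = sum(1 for s, y in zip(scores, labels) if s > cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s <= cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s <= cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s > cutoff and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

def best_cutoff(scores, labels, cutoffs):
    """Grid search for the cut-off maximizing J (the paper instead
    minimizes a modified index via simulated annealing)."""
    return max(cutoffs, key=lambda c: youden_j(scores, labels, c))
```

A perfectly separating cut-off yields J = 1; a cut-off no better than chance yields J = 0.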
TL;DR: N-Skart is a nonsequential procedure designed to deliver a confidence interval (CI) for the steady-state mean of a simulation output process when the user supplies a single simulation-generated time series of arbitrary size and specifies the required coverage probability for a CI based on that data set.
Abstract: We discuss N-Skart, a nonsequential procedure designed to deliver a confidence interval (CI) for the steady-state mean of a simulation output process when the user supplies a single simulation-generated time series of arbitrary size and specifies the required coverage probability for a CI based on that data set. N-Skart is a variant of the method of batch means that exploits separate adjustments to the half-length of the CI so as to account for the effects on the distribution of the underlying Student's t-statistic that arise from skewness (nonnormality) and autocorrelation of the batch means. If the sample size is sufficiently large, then N-Skart delivers not only a CI but also a point estimator for the steady-state mean that is approximately free of initialization bias. In an experimental performance evaluation involving a wide range of test processes and sample sizes, N-Skart exhibited close conformance to the user-specified CI coverage probabilities.
28 citations
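For context, the unadjusted method of batch means that N-Skart builds on splits the series into batches, treats the batch means as approximately i.i.d. normal, and forms a Student-t interval around their grand mean. N-Skart's skewness and autocorrelation adjustments are not shown; this is a sketch under those simplifying assumptions, with illustrative names:

```python
import statistics

def batch_means_ci(series, n_batches=10, t_crit=2.262):
    """Classic (unadjusted) batch-means CI for a steady-state mean.
    t_crit is the two-sided 95% Student-t critical value for
    n_batches - 1 = 9 degrees of freedom."""
    b = len(series) // n_batches
    # Mean of each nonoverlapping batch of b consecutive observations.
    means = [statistics.fmean(series[i * b:(i + 1) * b])
             for i in range(n_batches)]
    grand = statistics.fmean(means)
    half_length = t_crit * statistics.stdev(means) / n_batches ** 0.5
    return grand - half_length, grand + half_length
```

For autocorrelated output, batches must be long enough that the batch means are nearly independent; that requirement is what N-Skart's adjustments address.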
TL;DR: This paper argues that the poor coverage properties claimed by Santner et al. (2007) actually relate to an inferior version of the score interval (Mee, 1984), and that it is appropriate to align mean rather than minimum coverage with 1 − α, based on a moving-average representation of the coverage probability.
Abstract: A recent article (Santner et al., 2007) asserted that a score interval for a difference of independent binomial proportions (Miettinen and Nurminen, 1985) may have inadequate coverage. We re-visit the properties of score intervals for binomial proportions and their differences. Published data indicate these methods produce mean coverage slightly above the nominal confidence level 1 − α. We argue it is appropriate to align mean rather than minimum coverage with 1 − α, based on a moving average representation of the coverage probability. The poor coverage properties claimed by Santner et al. (2007) actually relate to an inferior version of the score interval (Mee, 1984).
28 citations
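For the simpler single-proportion case (not the difference of proportions the paper treats), the Wilson score interval and its exact coverage at a fixed true p can be computed directly; evaluating `exact_coverage` over a grid of p values exhibits the oscillation around the nominal level that motivates comparing mean rather than minimum coverage with 1 − α. A sketch with illustrative names:

```python
import math

def wilson_interval(x, n, z=1.959964):
    """Wilson score interval for a single binomial proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def exact_coverage(n, p, z=1.959964):
    """Exact coverage probability at true p: sum the binomial pmf over
    all outcomes x whose interval contains p."""
    cov = 0.0
    for x in range(n + 1):
        lo, hi = wilson_interval(x, n, z)
        if lo <= p <= hi:
            cov += math.comb(n, x) * p ** x * (1 - p) ** (n - x)
    return cov
```

The Miettinen–Nurminen interval for a difference of proportions discussed in the paper is a score-based construction in the same spirit, but requires a constrained maximum-likelihood step not shown here.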
TL;DR: This paper uses the jackknife, adjusted jackknife, and extended jackknife empirical likelihood methods to construct confidence intervals for the mean absolute deviation of a random variable; a simulation study compares the average length and coverage probability of these methods with those of the normal approximation method.
28 citations
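A plain (non-empirical-likelihood) jackknife interval for the mean absolute deviation conveys the basic mechanics: leave-one-out pseudo-values supply both a bias-corrected point estimate and a standard error. This is a simpler construction than the JEL intervals in the paper; all names are illustrative.

```python
import statistics

def jackknife_ci_mad(data, z=1.96):
    """Plain jackknife CI for the mean absolute deviation about the
    mean, built from leave-one-out pseudo-values."""
    n = len(data)

    def mad(xs):
        m = statistics.fmean(xs)
        return statistics.fmean(abs(x - m) for x in xs)

    full = mad(data)
    # Leave-one-out replicates and the corresponding pseudo-values.
    loo = [mad(data[:i] + data[i + 1:]) for i in range(n)]
    pseudo = [n * full - (n - 1) * v for v in loo]
    est = statistics.fmean(pseudo)              # bias-corrected estimate
    se = statistics.stdev(pseudo) / n ** 0.5    # jackknife standard error
    return est - z * se, est + z * se
```

The empirical likelihood variants in the paper replace the normal-approximation step with a likelihood-ratio calibration of the same pseudo-values.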