
Showing papers on "Coverage probability published in 2003"


Journal ArticleDOI
TL;DR: In this article, a conceptually different type of confidence interval is proposed, one that asymptotically covers the true value of the parameter, rather than the entire identification region, with fixed probability; however, the exact coverage probabilities of the simplest version of the new CI do not converge to their nominal values uniformly across different values of the width of the identification region.
Abstract: Recently a growing body of research has studied inference in settings where parameters of interest are partially identified. In many cases the parameter is real-valued and the identification region is an interval whose lower and upper bounds may be estimated from sample data. For this case confidence intervals (CIs) have been proposed that cover the entire identification region with fixed probability. Here, we introduce a conceptually different type of confidence interval. Rather than cover the entire identification region with fixed probability, we propose CIs that asymptotically cover the true value of the parameter with this probability. However, the exact coverage probabilities of the simplest version of our new CIs do not converge to their nominal values uniformly across different values for the width of the identification region. To avoid the problems associated with this, we modify the proposed CI to ensure that its exact coverage probabilities do converge uniformly to their nominal values. We motivate this modified CI through exact results for the Gaussian case.

662 citations
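
The Gaussian intuition behind this construction can be sketched in a few lines. The code below is a minimal illustration (not the authors' implementation) of an interval whose critical value is chosen so that the true parameter, rather than the whole identification region, is covered with probability 1 − α; the estimated bounds, standard deviations, and sample size are hypothetical.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def im_style_interval(lb_hat, ub_hat, sigma_lb, sigma_ub, n, alpha=0.05):
    # Sketch of an interval that covers the true (partially identified)
    # parameter, not the whole identification region, with probability 1 - alpha.
    delta = max(ub_hat - lb_hat, 0.0)        # estimated width of the identification region
    sigma = max(sigma_lb, sigma_ub)
    # Choose c so that Phi(c + sqrt(n) * delta / sigma) - Phi(-c) = 1 - alpha:
    # c is about 1.96 when delta = 0 and approaches the one-sided value as delta grows.
    f = lambda c: norm.cdf(c + np.sqrt(n) * delta / sigma) - norm.cdf(-c) - (1 - alpha)
    c = brentq(f, 1e-6, 10.0)
    return lb_hat - c * sigma_lb / np.sqrt(n), ub_hat + c * sigma_ub / np.sqrt(n)

# Hypothetical inputs: estimated bounds [0.20, 0.50], per-observation sds 1.0 and 1.2, n = 400.
print(im_style_interval(0.20, 0.50, 1.0, 1.2, 400))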


Journal ArticleDOI
TL;DR: In this article, the authors present a procedure for obtaining confidence intervals and tests for a single lognormal mean using the ideas of generalized p-values and generalized confidence intervals; the procedure, however, is computationally very involved.

176 citations


Journal ArticleDOI
TL;DR: The International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement is being increasingly recognized as a de facto international standard as mentioned in this paper, which recommends a standardized way of expressing uncertainty in all kinds of measurements and provides a comprehensive approach for combining information to evaluate that uncertainty.
Abstract: The International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement is being increasingly recognized as a de facto international standard. The ISO Guide recommends a standardized way of expressing uncertainty in all kinds of measurements and provides a comprehensive approach for combining information to evaluate that uncertainty. The ISO Guide supports uncertainties evaluated from statistical methods, Type A, and uncertainties determined by other means, Type B. The ISO Guide recommends classical (frequentist) statistics for evaluating the Type A components of uncertainty; but it interprets the combined uncertainty from a Bayesian viewpoint. This is inconsistent. In order to overcome this inconsistency, we suggest that all Type A uncertainties should be evaluated through a Bayesian approach. It turns out that the estimates from a classical statistical analysis are either equal or approximately equal to the corresponding estimates from a Bayesian analysis with non-informative prior probability distributions. So the classical (frequentist) estimates may be used provided they are interpreted from the Bayesian viewpoint. The procedure of the ISO Guide for evaluating the combined uncertainty is to propagate the uncertainties associated with the input quantities. This procedure does not yield a complete specification of the distribution represented by the result of measurement and its associated combined standard uncertainty. So the correct coverage factor for a desired coverage probability of an expanded uncertainty interval cannot always be determined. Nonetheless, the ISO Guide suggests that the coverage factor may be computed by assuming that the distribution represented by the result of measurement and its associated standard uncertainty is a normal distribution or a scaled-and-shifted t-distribution with degrees of freedom determined from the Welch–Satterthwaite formula. This assumption may be unjustified and the coverage factor so determined may be incorrect. A popular convention is to set the coverage factor as 2. When the distribution represented by the result of measurement and its associated standard uncertainty is not completely determined, the 2-standard-uncertainty interval may be interpreted in terms of its minimum coverage probability for an applicable class of probability distributions.

173 citations
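
To make the coverage-factor discussion concrete, the following sketch (not part of the paper) combines two hypothetical Type A standard uncertainties, applies the Welch–Satterthwaite formula, and compares the resulting Student-t coverage factor for roughly 95 % coverage with the conventional k = 2.

import numpy as np
from scipy.stats import t

# Hypothetical Type A evaluations: standard uncertainties of two input
# quantities, each estimated from a small sample.
u = np.array([0.15, 0.08])        # standard uncertainties u_i
nu = np.array([4, 9])             # degrees of freedom nu_i (n_i - 1)
c = np.array([1.0, 1.0])          # sensitivity coefficients

u_c = np.sqrt(np.sum((c * u) ** 2))                # combined standard uncertainty
nu_eff = u_c ** 4 / np.sum((c * u) ** 4 / nu)      # Welch-Satterthwaite effective dof
k = t.ppf(0.975, df=nu_eff)                        # coverage factor for ~95 % coverage

print(f"combined uncertainty u_c = {u_c:.3f}")
print(f"effective dof = {nu_eff:.1f}, coverage factor k = {k:.2f} (vs. the conventional k = 2)")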


Journal ArticleDOI
Holger Drees1
TL;DR: In this article, the asymptotic normality of a class of estimators for extreme quantiles is established under mild structural conditions on the observed stationary β-mixing time series.
Abstract: The asymptotic normality of a class of estimators for extreme quantiles is established under mild structural conditions on the observed stationary β-mixing time series. Consistent estimators of the asymptotic variance are introduced, which render possible the construction of asymptotic confidence intervals for the extreme quantiles. Moreover, it is shown that many well-known time series models satisfy our conditions. Then the theory is applied to a time series of returns of a stock index. Finally, the finite sample behavior of the proposed confidence intervals is examined in a simulation study. It turns out that for most time series models under consideration the actual coverage probability is pretty close to the nominal level if the sample fraction used for estimation is chosen appropriately.

156 citations


Journal ArticleDOI
TL;DR: Recommendations for selecting an interval in three situations—when one needs to guarantee a lower bound on a coverage probability, when it is sufficient to have actual coverage probability near the nominal level, and when teaching in a classroom or consulting environment are described.
Abstract: 'Exact' methods for categorical data are exact in terms of using probability distributions that do not depend on unknown parameters. However, they are conservative inferentially. The actual error probabilities for tests and confidence intervals are bounded above by the nominal level. This article examines the conservatism for interval estimation and describes ways of reducing it. We illustrate for confidence intervals for several basic parameters, including the binomial parameter, the difference between two binomial parameters for independent samples, and the odds ratio and relative risk. Less conservative behavior results from devices such as (1) inverting tests using statistics that are 'less discrete', (2) inverting a single two-sided test rather than two separate one-sided tests each having size at least half the nominal level, (3) using unconditional rather than conditional methods (where appropriate) and (4) inverting tests using alternative p-values. The article concludes with recommendations for selecting an interval in three situations-when one needs to guarantee a lower bound on a coverage probability, when it is sufficient to have actual coverage probability near the nominal level, and when teaching in a classroom or consulting environment.

86 citations
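
The conservatism described above can be quantified exactly for the binomial parameter, since the actual coverage at any p is a finite sum over the possible counts. The sketch below (mine, not the article's) computes the exact coverage of the Clopper-Pearson interval on a few values of p for a hypothetical n.

import numpy as np
from scipy.stats import binom, beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

def exact_coverage(n, p, alpha=0.05):
    """Actual coverage at p: sum of P(X = x) over x whose interval contains p."""
    xs = np.arange(n + 1)
    cover = [lo <= p <= hi for lo, hi in (clopper_pearson(x, n, alpha) for x in xs)]
    return np.sum(binom.pmf(xs, n, p) * np.array(cover))

n = 30
for p in (0.05, 0.1, 0.3, 0.5):
    print(f"p = {p:.2f}: actual coverage = {exact_coverage(n, p):.4f} (nominal 0.95)")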


01 Jan 2003
TL;DR: In this paper, interval estimation of the mean in the natural exponential family with a quadratic variance function is considered; the results and additional computation suggest that the equal-tailed Jeffreys interval and the likelihood ratio interval are the best overall alternatives to the Wald interval.
Abstract: In this paper we consider interval estimation of the mean in the natural exponential family with a quadratic variance function; the family comprises the binomial, Poisson, negative binomial, normal, gamma, and a sixth distribution. For the three discrete cases, the Wald confidence interval and three alternative intervals are examined by means of two-term Edgeworth expansions of the coverage probability and a two-term expansion of the expected length. The results and additional computation suggest that the equal-tailed Jeffreys interval and the likelihood ratio interval are the best overall alternatives to the Wald interval. We also show that the poor performance of the Wald interval is not limited to the discrete cases, and a serious negative bias occurs in the nonnormal continuous cases as well. The results are complemented by various illustrative examples.

65 citations
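
For the binomial member of the family, the competing intervals are easy to write down. The sketch below (not from the paper) builds the Wald and equal-tailed Jeffreys intervals for hypothetical data and evaluates their exact coverage at one value of p, where the Wald interval's undercoverage is visible.

import numpy as np
from scipy.stats import norm, beta, binom

def wald(x, n, alpha=0.05):
    p = x / n
    h = norm.ppf(1 - alpha / 2) * np.sqrt(p * (1 - p) / n)
    return p - h, p + h

def jeffreys(x, n, alpha=0.05):
    # Equal-tailed interval from the Beta(x + 1/2, n - x + 1/2) posterior (Jeffreys prior).
    return (beta.ppf(alpha / 2, x + 0.5, n - x + 0.5),
            beta.ppf(1 - alpha / 2, x + 0.5, n - x + 0.5))

def coverage(ci, n, p, alpha=0.05):
    xs = np.arange(n + 1)
    hit = np.array([lo <= p <= hi for lo, hi in (ci(x, n, alpha) for x in xs)])
    return np.sum(binom.pmf(xs, n, p) * hit)

print("Wald CI for x=2, n=25:    ", wald(2, 25))
print("Jeffreys CI for x=2, n=25:", jeffreys(2, 25))
print("exact coverage at p=0.1, n=25: Wald %.3f, Jeffreys %.3f"
      % (coverage(wald, 25, 0.1), coverage(jeffreys, 25, 0.1)))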


Journal ArticleDOI
Lutz Dümbgen1
TL;DR: In this article, the authors construct confidence bands for f with guaranteed given coverage probability, assuming that f is isotonic or convex; these confidence bands are computationally feasible and are shown to be asymptotically sharp optimal in an appropriate sense.
Abstract: Let Y be a stochastic process on [0,1] satisfying dY(t) = n^{1/2} f(t) dt + dW(t), where n ≥ 1 is a given scale parameter (`sample size'), W is standard Brownian motion and f is an unknown function. Utilizing suitable multiscale tests, we construct confidence bands for f with guaranteed given coverage probability, assuming that f is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.

55 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare the empirical coverage probability of confidence intervals based on both the standard normal distribution and the t-distribution, in conjunction with several methods of estimating the heterogeneity variance for a standardized mean difference.
Abstract: Under the random effects model for meta-analysis, confidence intervals for the overall effect are typically constructed using quantiles of the standard normal distribution. We discuss confidence intervals based on both the standard normal distribution and the t-distribution, in conjunction with several methods of estimating the heterogeneity variance for a standardized mean difference, and we compare the empirical coverage probabilities of the intervals using simulation. The coverage probabilities of intervals based on an approximate t-statistic are higher than the coverage probabilities for the standard normal intervals, and are very close to the specified confidence level even for small meta-analysis sample size. Moreover, intervals based on the approximate t-statistic appear relatively robust to different methods of estimating the heterogeneity variance, unlike the normal intervals. Thus, we conclude that confidence intervals based on the t-statistic are superior to the standard normal confidence intervals.

48 citations
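
To illustrate the comparison above, here is a minimal random-effects meta-analysis sketch (not the authors' code) using the DerSimonian-Laird estimate of the heterogeneity variance for hypothetical standardized mean differences; the interval is formed with either the standard normal quantile or the t quantile with k − 1 degrees of freedom.

import numpy as np
from scipy.stats import norm, t

# Hypothetical standardized mean differences and their within-study variances.
y = np.array([0.30, 0.12, 0.45, 0.26, 0.05])
v = np.array([0.04, 0.09, 0.05, 0.03, 0.08])
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1.0 / v
Q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1.0 / (v + tau2)
mu_hat = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))

for name, q in [("normal", norm.ppf(0.975)), ("t, k-1 df", t.ppf(0.975, df=k - 1))]:
    print(f"{name}: {mu_hat - q * se:.3f} to {mu_hat + q * se:.3f}")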


Journal ArticleDOI
TL;DR: In this article, the estimates of knot locations and coefficients are obtained through a non-linear least squares solution that corresponds to the maximum likelihood estimate, and confidence intervals are then constructed based on the asymptotic distribution of the estimator.
Abstract: The estimates of knot locations and coefficients are obtained through a non-linear least squares solution that corresponds to the maximum likelihood estimate. Confidence intervals are then constructed based on the asymptotic distribution of the maximum likelihood estimator. Average coverage probabilities and the accuracy of the estimate are examined via simulation. This includes comparisons between our method and some existing methods such as smoothing spline and variable knots selection as well as a Bayesian version of the variable knots method. Simulation results indicate that our method works well for smooth underlying functions and also reasonably well for discontinuous functions. It also performs well for fairly small sample sizes.

45 citations


Journal ArticleDOI
12 Apr 2003
TL;DR: In this paper, it is shown that record values can be used to provide distribution-free confidence intervals for population quantiles and tolerance intervals, and universal upper bounds for the expectation of the length of the confidence intervals are derived.
Abstract: In a number of situations only observations that exceed or only those that fall below the current extreme value are recorded. Examples include meteorology, hydrology, athletic events and mining. Industrial stress testing is also an example in which only items that are weaker than all the observed items are destroyed. In this paper, it is shown how record values can be used to provide distribution-free confidence intervals for population quantiles and tolerance intervals. We provide some tables that help one choose the appropriate record values and present a numerical example. Universal upper bounds for the expectation of the length of the confidence intervals are also derived. The results may be of interest in situations where only record values are stored.

42 citations


Journal ArticleDOI
TL;DR: In this paper, the size distortions of tests for structural parameters in the simultaneous equations model were fixed by computing critical value functions based on the conditional distribution of test statistics, which can then be used to construct informative confidence regions for the structural parameter with correct coverage probability.
Abstract: This paper fixes size distortions of tests for structural parameters in the simultaneous equations model by computing critical value functions based on the conditional distribution of test statistics. The conditional tests can then be used to construct informative confidence regions for the structural parameter with correct coverage probability. Commands to implement these tests in Stata are also introduced.

Journal ArticleDOI
TL;DR: The modified signed log‐likelihood ratio method produces a confidence interval with a nearly exact coverage probability and highly accurate and symmetric error probabilities even for extremely small sample sizes.
Abstract: To construct a confidence interval for the mean of a log-normal distribution in small samples, we propose likelihood-based approaches - the signed log-likelihood ratio and modified signed log-likelihood ratio methods. Extensive Monte Carlo simulation results show the advantages of the modified signed log-likelihood ratio method over the signed log-likelihood ratio method and other methods. In particular, the modified signed log-likelihood ratio method produces a confidence interval with a nearly exact coverage probability and highly accurate and symmetric error probabilities even for extremely small sample sizes. We then apply the methods to two sets of real-life data.

Journal ArticleDOI
TL;DR: In this article, the authors construct a point estimate and a confidence interval that are motivated by an adaptive test statistic, and the estimator is consistent for the treatment effect and the confidence interval asymptotically has correct coverage probability.
Abstract: In a comparative clinical trial, if the maximum information is adjusted on the basis of unblinded data, the usual test statistic should be avoided due to possible type I error inflation. An adaptive test can be used as an alternative. The usual point estimate of the treatment effect and the usual confidence interval should also be avoided. In this article, we construct a point estimate and a confidence interval that are motivated by an adaptive test statistic. The estimator is consistent for the treatment effect and the confidence interval asymptotically has correct coverage probability.

Journal ArticleDOI
Nader Tajvidi1
01 Jun 2003-Extremes
TL;DR: In this paper, the empirical coverage of some standard bootstrap and likelihood-based confidence intervals for the parameters and upper p-quantiles of the generalized Pareto distribution (GPD) is compared.
Abstract: The generalized Pareto distribution (GPD) is a two-parameter family of distributions which can be used to model exceedances over a threshold. We compare the empirical coverage of some standard bootstrap and likelihood-based confidence intervals for the parameters and upper p-quantiles of the GPD. Simulation results indicate that none of the bootstrap methods give satisfactory intervals for small sample sizes. By applying a general method of D. N. Lawley, correction factors for likelihood ratio statistics of the parameters and quantiles of the GPD have been calculated. Simulations show that for small sample sizes the accuracy of confidence intervals can be improved by incorporating the computed correction factors into the likelihood-based confidence intervals. While the modified likelihood method has better empirical coverage probability, the mean length of the resulting intervals is not longer than that of the corresponding bootstrap confidence intervals. This article also investigates the performance of some bootstrap methods for estimation of accuracy measures of maximum likelihood estimators of parameters and quantiles of the GPD.
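
As a rough illustration of the bootstrap side of this comparison (not the article's code), the sketch below fits a GPD to simulated exceedances with scipy and forms a percentile-bootstrap interval for an upper quantile; for samples this small the abstract's warning about unsatisfactory bootstrap coverage should be kept in mind.

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
exceed = genpareto.rvs(c=0.2, scale=1.0, size=50, random_state=rng)  # simulated exceedances

p = 0.99  # upper quantile of the exceedance distribution
c_hat, _, scale_hat = genpareto.fit(exceed, floc=0)   # fix the location at the threshold
q_hat = genpareto.ppf(p, c_hat, loc=0, scale=scale_hat)

# Parametric percentile bootstrap for the p-quantile.
boot = []
for _ in range(2000):
    resample = genpareto.rvs(c_hat, scale=scale_hat, size=len(exceed), random_state=rng)
    c_b, _, s_b = genpareto.fit(resample, floc=0)
    boot.append(genpareto.ppf(p, c_b, loc=0, scale=s_b))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"quantile estimate {q_hat:.2f}, 95% percentile-bootstrap interval ({lo:.2f}, {hi:.2f})")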

Journal ArticleDOI
TL;DR: In this paper, the size distortions of tests for structural parameters in the simultaneous equations model are fixed by computing critical value functions based on the conditional distribution of test statistics.
Abstract: In this paper, we propose a fix to the size distortions of tests for structural parameters in the simultaneous equations model by computing critical value functions based on the conditional distribution of test statistics. The conditional tests can then be used to construct informative confidence regions for the structural parameter with correct coverage probability. Commands to implement these tests in Stata are also introduced. Together with the Anderson–Rubin (1949) and score tests, the conditional Wald and likelihood-ratio tests can be used to construct confidence intervals that have correct coverage probability even when instruments may be weak and that are informative when instruments are good. The regions based on the conditional Wald test necessarily contain the 2SLS estimator, while the ones based on the conditional likelihood-ratio and score tests are centered around the limited-information maximum likelihood (LIML) estimator. Therefore, confidence regions based on these tests can be used as reliable evidence of the accuracy of commonly used estimators. In Section 2, exact results are developed for the two-equation model under the assumption that the reduced-form disturbances are normally distributed with a known covariance matrix.

Journal ArticleDOI
TL;DR: In this article, the authors explore two proposals for finding empirical Bayes prediction intervals under a normal regression model and compare the coverage probabilities and expected lengths of such intervals via appropriate higher-order asymptotics.

Proceedings ArticleDOI
07 Dec 2003
TL;DR: An automated wavelet-based spectral method for constructing an approximate confidence interval on the steady-state mean of a simulation output process that satisfies user-specified requirements on absolute or relative precision as well as coverage probability.
Abstract: We develop an automated wavelet-based spectral method for constructing an approximate confidence interval on the steady-state mean of a simulation output process. This procedure, called WASSP, determines a batch size and a warm-up period beyond which the computed batch means form an approximately stationary Gaussian process. Based on the log-smoothed-periodogram of the batch means, WASSP uses wavelets to estimate the batch means log-spectrum and ultimately the steady-state variance constant (SSVC) of the original (unbatched) process. WASSP combines the SSVC estimator with the grand average of the batch means in a sequential procedure for constructing a confidence-interval estimator of the steady-state mean that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. An extensive performance evaluation provides evidence of WASSP's robustness in comparison with some other output analysis methods.
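
WASSP's wavelet machinery is beyond a short example, but the batch-means confidence interval it ultimately delivers is simple. The sketch below (a plain batch-means method, not WASSP) shows that final step on a simulated AR(1) output process, with the warm-up period and batch size fixed by hand rather than determined by the procedure's tests.

import numpy as np
from scipy.stats import t

rng = np.random.default_rng(1)

# Simulated autocorrelated simulation output: AR(1) with steady-state mean 10.
n, phi = 20000, 0.8
x = np.empty(n)
x[0] = 10.0
for i in range(1, n):
    x[i] = 10.0 + phi * (x[i - 1] - 10.0) + rng.normal()

warmup, batch_size = 2000, 500          # chosen by hand here, by WASSP's tests in the paper
y = x[warmup:]
k = len(y) // batch_size
means = y[: k * batch_size].reshape(k, batch_size).mean(axis=1)   # batch means

grand = means.mean()
se = means.std(ddof=1) / np.sqrt(k)
half = t.ppf(0.975, df=k - 1) * se
print(f"steady-state mean estimate {grand:.3f} +/- {half:.3f} (95% CI)")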

Journal ArticleDOI
TL;DR: Simulation results indicate that the new procedure is preferable to all its competitors in most cases.

Journal ArticleDOI
TL;DR: In this article, the authors proposed two parametric bootstrap methods to incorporate the variability of the corresponding parameter estimators, and evaluated the coverage probability of these proposed methods in a real dataset and compared the results with those from naive (i.e., treating estimated parameters as known) and Bayesian methods.
Abstract: In spatial predictions, researchers usually treat the estimated theoretical variogram parameters as known without error and ignore the variability of the parameter estimators. Although the prediction is still unbiased, the prediction error is usually underestimated. Therefore, the coverage probability of the prediction interval usually is lower than the nominal probability. A simulation study is performed to show how the coverage probability for prediction relates to the true range and sill of an exponential variogram. This article proposes two parametric bootstrap methods to incorporate the variability of the corresponding parameter estimators. A simulation study is performed to evaluate the coverage probability of these proposed methods. Finally, we apply the parametric bootstrap methods to a real dataset and compare the results with those from naive (i.e., treating estimated parameters as known) and Bayesian methods.

Journal ArticleDOI
TL;DR: These new “nonparametric sampling” inferential methods are found to provide more than the nominal coverage probability for lower confidence bounds regardless of sample size, and to be surprisingly efficient relative to the Central Limit Theorem bounds in settings where overpayments are essentially all-or-nothing and where the payment population is relatively homogeneous and well separated from zero.
Abstract: Random sampling of paid Medicare claims has been a legally acceptable approach for investigating suspicious billing practices by health care providers (e.g. physicians, hospitals, medical equipment and supplies providers, etc.) since 1986. A population of payments made to a given provider during a given time frame is isolated and a probability sample selected for investigation. For each claim or claim detail line, the overpayment is defined to be the amount paid minus the amount that should have been paid, given all evidence collected by the investigator. Current procedures stipulate that, using the probability sample’s observed overpayments, a 90% lower confidence bound for the total overpayment over the entire population is to be used as a recoupment demand to the provider. It is not unusual for these recoupment demands to exceed a million dollars. It is also not unusual for the statistical methods used in sampling and calculating the recoupment demand to be challenged in court. Though it is quite conservative in most settings, for certain types of overpayment populations the standard method for computing a lower confidence bound on the population total, based on the Central Limit Theorem, can fail badly even at relatively large sample sizes. Here, we develop “nonparametric sampling” inferential methods using simple random samples and the hypergeometric distribution, and study their performance on four real payment populations. These new methods are found to provide more than the nominal coverage probability for lower confidence bounds regardless of sample size, and to be surprisingly efficient relative to the Central Limit Theorem bounds in settings where overpayments are essentially all-or-nothing and where the payment population is relatively homogeneous and well separated from zero. The new methods are especially well-suited for sampling payment populations for providers of motorized wheelchairs, which at the time of this article’s submission was a national crisis. Extensions to stratified random samples and to settings where there are frequent partial overpayments are discussed.
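
For the all-or-nothing situation highlighted above, the hypergeometric bound can be sketched directly: find the smallest number of fully overpaid claims in the population that remains consistent, at level α, with the count observed in the sample. The code below is a simplified illustration with made-up numbers, not the authors' procedure for general overpayment amounts.

from scipy.stats import hypergeom

def lower_bound_overpaid(N, n, x, alpha=0.10):
    """Smallest population count M of fully overpaid claims such that
    observing >= x overpaid claims in a simple random sample of n from N
    has probability at least alpha (a 1 - alpha lower confidence bound)."""
    for M in range(x, N + 1):
        if hypergeom.sf(x - 1, N, M, n) >= alpha:   # P(X >= x | M overpaid in population)
            return M
    return N

# Hypothetical audit: population of 1200 claims of $800 each,
# sample of 60 claims, 45 found fully overpaid.
N, n, x, amount = 1200, 60, 45, 800.0
M_lower = lower_bound_overpaid(N, n, x)
print(f"90% lower bound: at least {M_lower} overpaid claims, i.e. ${M_lower * amount:,.0f}")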

Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of computing an uncertainty interval for a measurand θ having a prescribed confidence level of 1 − α, and developed a highly accurate approximation for the coverage probability associated with the interval [Y − k·u_y, Y + k·u_y].
Abstract: A measurand θ of interest is the ratio of two other quantities, μ_p and μ_q. A measurement experiment is conducted and results P and Q are obtained as estimates of μ_p and μ_q. The ratio Y = P/Q is generally reported as the result for the measurand θ. In this paper we consider the problem of computing an uncertainty interval for θ having a prescribed confidence level of 1 − α. Although an exact procedure, based on an approach due to Fieller, is available for this problem, it is well known that this procedure can lead to unbounded confidence regions in certain situations. As a result, practitioners often use various non-exact methods. One such non-exact method is based on the propagation-of-errors approach described in the ISO Guide to the Expression of Uncertainty in Measurement to calculate a standard uncertainty u_y for Y. A confidence interval for θ with a presumed confidence level of 95% is obtained as [Y − 2u_y, Y + 2u_y]. In this paper we develop a highly accurate approximation for the coverage probability associated with the interval [Y − k·u_y, Y + k·u_y]. In particular, we demonstrate that using n − 1 degrees of freedom for u_y, and the corresponding Student's t coverage factor k = t_{1−α/2; n−1} rather than k = 2, leads to uncertainty intervals [Y − t_{1−α/2; n−1}·u_y, Y + t_{1−α/2; n−1}·u_y] that are nearly identical to Fieller's exact intervals whenever the measurement relative uncertainties are small, as is the case in most metrological applications. In addition, they are easy to compute and may be recommended for routine use in metrological applications. Improved coverage factors k can be derived based on the results of this paper for those exceptional situations where the t-interval may not have coverage probability sufficiently close to the desired value.
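
When P and Q are independent, both intervals compared in this paper take a few lines to compute. The sketch below (my own, under that independence assumption and with hypothetical inputs) contrasts Fieller's interval with the propagation-of-errors interval that uses the Student-t coverage factor.

import numpy as np
from scipy.stats import t

# Hypothetical measurement results and standard uncertainties (independent P and Q).
P, u_p = 10.2, 0.12
Q, u_q = 4.9, 0.07
nu = 9                              # degrees of freedom for the uncertainties
k = t.ppf(0.975, df=nu)             # Student-t coverage factor

# Propagation-of-errors (GUM-style) interval for the ratio theta = mu_p / mu_q.
Y = P / Q
u_y = abs(Y) * np.sqrt((u_p / P) ** 2 + (u_q / Q) ** 2)
print(f"t-interval:       ({Y - k * u_y:.4f}, {Y + k * u_y:.4f})")

# Fieller's interval: roots of (Q^2 - k^2 u_q^2) theta^2 - 2 P Q theta + (P^2 - k^2 u_p^2) = 0.
a = Q ** 2 - (k * u_q) ** 2
b = -2 * P * Q
c = P ** 2 - (k * u_p) ** 2
disc = b ** 2 - 4 * a * c
if a > 0 and disc > 0:
    lo = (-b - np.sqrt(disc)) / (2 * a)
    hi = (-b + np.sqrt(disc)) / (2 * a)
    print(f"Fieller interval: ({lo:.4f}, {hi:.4f})")
else:
    print("Fieller region is unbounded for these inputs")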

Journal ArticleDOI
TL;DR: In this paper, the authors compared the coverage probability and the average length for Woolfs logit interval estimator, Gart's logit intervals estimator of adding 0.50, and Cornfield's interval estimators with the continuity correction, and without the continuity corrections in a variety of situations.
Abstract: It is well known that Cornfield's confidence interval of the odds ratio with the continuity correction can mimic the performance of the exact method. Furthermore, because the calculation procedure of using the former is much simpler than that of using the latter, Cornfield's confidence interval with the continuity correction is highly recommended by many publications. However, all these papers that draw this conclusion are on the basis of examining the coverage probability exclusively. The efficiency of the resulting confidence intervals is completely ignored. This paper calculates and compares the coverage probability and the average length for Woolfs logit interval estimator, Gart's logit interval estimator of adding 0.50, Cornfield's interval estimator with the continuity correction, and Cornfield's interval estimator without the continuity correction in a variety of situations. This paper notes that Cornfield's interval estimator with the continuity correction is too conservative, while Cornfield's method without the continuity correction can improve efficiency without sacrificing the accuracy of the coverage probability. This paper further notes that when the sample size is small (say, 20 or 30 per group) and the probability of exposure in the control group is small (say, 0.10) or large (say, 0.90), using Cornfield's method without the continuity correction is likely preferable to all the other estimators considered here. When the sample size is large (say, 100 per group) or when the probability of exposure in the control group is moderate (say, 0.50), Gart's logit interval estimator is probably the best.
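
For reference, the two logit-type estimators compared above are essentially one-liners. The sketch below (not from the paper) computes Woolf's interval and Gart's add-0.5 version for a hypothetical 2 × 2 table.

import numpy as np
from scipy.stats import norm

def logit_or_ci(a, b, c, d, add=0.0, alpha=0.05):
    """Woolf's logit interval for the odds ratio (add=0), or Gart's
    version with 0.5 added to every cell (add=0.5)."""
    a, b, c, d = (x + add for x in (a, b, c, d))
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = norm.ppf(1 - alpha / 2)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)

# Hypothetical table: exposed cases/controls = 12/8, unexposed cases/controls = 6/14.
print("Woolf:      ", logit_or_ci(12, 8, 6, 14))
print("Gart (+0.5):", logit_or_ci(12, 8, 6, 14, add=0.5))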

Proceedings ArticleDOI
09 Dec 2003
TL;DR: An explicit formula for constructing the confidence interval of a binomial parameter with guaranteed coverage probability is derived; it overcomes the limitation of the normal approximation, which is asymptotic in nature and thus inevitably introduces unknown errors in applications.
Abstract: In this paper, we develop efficient randomized algorithms for estimating the probabilistic robustness margin and constructing the robustness degradation curve for uncertain dynamic systems. One remarkable feature of these algorithms is their universal applicability to robustness analysis problems with arbitrary robustness requirements and uncertainty bounding sets. In contrast to existing probabilistic methods, our approach does not depend on the feasibility of computing the deterministic robustness margin. We have developed efficient methods such as probabilistic comparison, probabilistic bisection and backward iteration to facilitate the computation. In particular, the confidence interval for binomial random variables has been frequently used in the estimation of the probabilistic robustness margin and in the accuracy evaluation of the estimated robustness degradation function. Motivated by the importance of fast computation of binomial confidence intervals in the context of probabilistic robustness analysis, we have derived an explicit formula for constructing the confidence interval of a binomial parameter with guaranteed coverage probability. The formula overcomes the limitation of the normal approximation, which is asymptotic in nature and thus inevitably introduces unknown errors in applications. Moreover, the formula is extremely simple and very tight in comparison with the classic Clopper-Pearson approach.

Journal ArticleDOI
TL;DR: In this article, the authors present confidence intervals that are correct when conditioning on the subset of data for which a trial stopped at a particular analysis, and then use conditional coverage probabilities to compare the sample mean, stagewise, and repeated confidence intervals.
Abstract: The works of Fisher (1959) and Buehler (1959) discuss the importance of conditioning on recognizable subsets of the sample space. The stopping time yields an easily identifiable partition of the sample space when considering group sequential testing. We first present confidence intervals that are correct when conditioning on the subset of data for which a trial stopped at a particular analysis. These intervals have very desirable properties for observations that are highly unusual (given any value of the mean). In addition, they provide insight into how information about the mean is distributed between the two sufficient statistics. We then use conditional coverage probabilities to compare the sample mean, stagewise, and repeated confidence intervals. We find that none of these intervals outperforms the others when conditioning on stopping time, and no interval is a uniformly acceptable performer.

Journal ArticleDOI
TL;DR: In this article, a score test of hypotheses pertaining to the marginal and conditional probabilities in a 2 × 2 table with structural zero via the risk/rate difference measure was proposed, and the performance of the score test and the existing likelihood ratio test was evaluated.
Abstract: In some infectious disease studies and 2-step treatment studies, a 2 × 2 table with a structural zero can arise in situations where it is theoretically impossible for a particular cell to contain observations or where a structural void is introduced by design. In this article, we propose a score test of hypotheses pertaining to the marginal and conditional probabilities in a 2 × 2 table with a structural zero via the risk/rate difference measure. A score-test-based confidence interval is also outlined. We evaluate the performance of the score test and the existing likelihood ratio test. Our empirical results evince the similar and satisfactory performance of the two tests (with appropriate adjustments) in terms of coverage probability and expected interval width. Both tests consistently perform well from small- to moderate-sample designs. The score test, however, has the advantage that it is only undefined in one scenario, while the likelihood ratio test can be undefined in many scenarios. We illustrate our method by a real example from a two-step tuberculosis skin test study.

Journal ArticleDOI
TL;DR: The Buehler 1 − α upper confidence limit is as small as possible, subject to the constraints that its coverage probability never falls below 1 − α and that it is a non-decreasing function of a designated statistic T, as discussed by the authors.

Journal ArticleDOI
TL;DR: The author proposed a closed-form estimator for σ² and showed analytically that the difference between the effective and nominal levels of significance is negligible and that the power exceeds 1 − β when the initial sample size is large.
Abstract: In clinical trials, one of the main questions that is being asked is how many additional observations, if any, are needed beyond those originally planned. In a two-treatment double-blind clinical experiment, one is interested in testing the null hypothesis of equality of the means against a one-sided alternative when the common variance σ² is unknown. We wish to determine the required total sample size when the error probabilities α and β are specified at a predetermined alternative. Shih provided a two-stage procedure which is an extension of Stein's one-sample procedure, assuming normal response. He estimates σ² by the method of maximum likelihood via the EM algorithm and carries out a simulation study in order to evaluate the effective level of significance and the power. The author proposed a closed-form estimator for σ² and showed analytically that the difference between the effective and nominal levels of significance is negligible and that the power exceeds 1 − β when the initial sample size is large. Here we consider responses from arbitrary distributions in which the mean and the variance are not functionally related and show that when the initial sample size is large, the conclusions drawn previously by the author still hold. The effective coverage probability of a fixed-width interval is also evaluated. Proofs of certain assertions are deferred to the Appendix.
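
The flavor of such a two-stage procedure can be conveyed with a short sketch (not the author's exact estimator): a first-stage sample provides a variance estimate, from which a normal-theory total sample size per arm for the specified α, β, and targeted difference is computed. The design constants and data below are hypothetical.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

alpha, beta_err, delta = 0.025, 0.10, 1.5   # one-sided level, type II error, targeted mean difference
n0 = 20                                     # initial (first-stage) sample size per arm

# First-stage data from both arms, used only to estimate the common variance.
stage1_a = rng.normal(10.0, 3.0, n0)
stage1_b = rng.normal(10.0, 3.0, n0)
s2 = (np.var(stage1_a, ddof=1) + np.var(stage1_b, ddof=1)) / 2.0

# Normal-theory total sample size per arm for a two-sample one-sided test.
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta_err)
n_total = int(np.ceil(2.0 * s2 * (z_a + z_b) ** 2 / delta ** 2))

extra = max(0, n_total - n0)
print(f"estimated variance {s2:.2f}; total n per arm {max(n_total, n0)}; additional observations needed {extra}")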

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of estimating the standard interval in the presence of rounding and show that the nominal confidence levels of standard intervals for μ and σ are much larger than actual coverage probabilities when the rounding is severe.
Abstract: In standard statistical analyses, data are assumed to be essentially exact. But indeed they are often obtained from a relatively crude gaging method and are thus intrinsically “rounded” to some nearest unit. The discussions in Lee and Vardeman (2001, Journal of Quality Technology 33:335–348) and Lee and Vardeman (2002, Communications in Statistics 31:13–34) for a rounded sample from a single normal distribution established that nominal confidence levels of standard intervals for μ and σ are much larger than actual coverage probabilities when the rounding is severe. In this article we consider interval estimation in the balanced normal one-way random effects model. We demonstrate the deficiency of standard interval estimators in the presence of rounding and show how likelihood...
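
The likelihood-based repair alluded to at the end of the abstract treats each rounded value as an interval observation. The sketch below (a single-sample version, not the balanced one-way random effects model of the paper) maximizes the rounded-data likelihood for μ and σ and compares it with the naive estimates that ignore rounding; the data and rounding unit are made up.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(3)
unit = 1.0                                   # rounding unit (data recorded to the nearest integer)
raw = rng.normal(5.2, 0.4, size=30)
x = np.round(raw / unit) * unit              # what is actually observed

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # Each rounded value x contributes P(x - unit/2 < X <= x + unit/2).
    p = norm.cdf(x + unit / 2, mu, sigma) - norm.cdf(x - unit / 2, mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(neg_loglik, x0=[x.mean(), np.log(max(x.std(ddof=1), 0.1))], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

print(f"naive:      mean {x.mean():.3f}, sd {x.std(ddof=1):.3f}")
print(f"rounded-ML: mean {mu_hat:.3f}, sd {sigma_hat:.3f}")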

Journal ArticleDOI
TL;DR: In this paper, a general analytical method for the probabilistic evaluation of power system transient stability is discussed and a new statistical inference approach for this evaluation is proposed in particular, the transient stability probability (TSP) is defined and evaluated by taking into account the random nature of both the system loads and the fault clearing times.

Journal ArticleDOI
TL;DR: This paper investigates the performance of three different types of confidence regions, with asymptotically correct coverage probability as the number of pedigrees grows, and shows that the expected length of the confidence region is inversely proportional to the slope‐to‐noise ratio.
Abstract: When statistical linkage to a certain chromosomal region has been found, it is of interest to develop methods quantifying the accuracy with which the disease locus can be mapped. In this paper, we investigate the performance of three different types of confidence regions, with asymptotically correct coverage probability as the number of pedigrees grows. Our setup is that of a saturated map of marker data. We allow for arbitrary combinations of pedigree structures, and treat various kinds of genetic models (e.g. binary and quantitative phenotypes) in a unified way. The linkage scores are weighted sums of the individual family scores, with NPL and lod scores as special cases. We show that the expected length of the confidence region is inversely proportional to the slope-to-noise ratio, or equivalently, inversely proportional to the product of the square of the noncentrality parameter and a certain normalized slope-to-noise ratio. Our investigations reveal that maximal expected linkage scores can be quite different from estimation-based performance criteria based on expected length of confidence regions. The main reason is that there is no simple relationship between peak height and peak slope of the mean linkage score. One application of our results is planning of linkage studies: given a certain genetic model, we can approximate the number of pedigrees needed to obtain a confidence region with given coverage probability and expected length.