
Showing papers on "Coverage probability published in 1992"


Journal ArticleDOI
TL;DR: In this article, the authors propose computing approximate simultaneous confidence intervals by approximating R, the correlation matrix of the estimators of Θ1, …, Θk, with the closest one-factor structure correlation matrix.
Abstract: Consider the general linear model (GLM) Y = Xβ + e. Suppose Θ1,…, Θk, a subset of the β's, are of interest; Θ1,…, Θk may be treatment contrasts in an ANOVA setting or regression coefficients in a response surface setting. Existing simultaneous confidence intervals for Θ1,…, Θk are relatively conservative or, in the case of the MEANS option in PROC GLM of SAS, possibly misleading. The difficulty is with the multidimensionality of the integration required to compute exact coverage probability when X does not follow a nice textbook design. Noting that such exact coverage probabilities can be computed if the correlation matrix R of the estimators of Θ1, …, Θk has a one-factor structure in the factor analytic sense, it is proposed that approximate simultaneous confidence intervals be computed by approximating R with the closest one-factor structure correlation matrix. Computer simulations of hundreds of randomly generated designs in the settings of regression, analysis of covariance, and unbalanced bl...
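For reference, the computational payoff of the one-factor approximation can be sketched as follows; the notation (factor loadings λ_i, critical value c) is assumed here rather than taken from the paper, and for Studentized pivots one further averaging over the error-variance estimate is needed. If R has one-factor structure, the k-dimensional coverage integral collapses to a single integral over the common factor:

```latex
% One-factor structure: R_{ij} = \lambda_i \lambda_j for i \neq j, with |\lambda_i| \le 1.
% Writing T_i = \lambda_i Z_0 + \sqrt{1-\lambda_i^2}\, Z_i, where Z_0, Z_1, \dots, Z_k are
% independent standard normals, reproduces this correlation matrix, so
\[
\Pr\Bigl(\bigcap_{i=1}^{k} \{|T_i| \le c\}\Bigr)
  = \int_{-\infty}^{\infty} \prod_{i=1}^{k}
    \left[ \Phi\!\left(\frac{c - \lambda_i z}{\sqrt{1 - \lambda_i^2}}\right)
         - \Phi\!\left(\frac{-c - \lambda_i z}{\sqrt{1 - \lambda_i^2}}\right) \right]
    \phi(z)\, dz .
\]
```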

107 citations


Journal ArticleDOI
TL;DR: In this article, a Monte Carlo experiment was conducted to compare the performance of Rao's and Kleijnen's cross-validation methods under the normality assumption and the uniform distribution assumption.
Abstract: Linear regression analysis is important in many fields. In the analysis of simulation results, a regression metamodel can be applied, even when common pseudorandom numbers are used. To test the validity of the specified regression model, Rao (1959) generalized the F statistic for lack of fit, whereas Kleijnen (1983) proposed a cross-validation procedure using a Student's t statistic combined with Bonferroni's inequality. This paper reports on an extensive Monte Carlo experiment designed to compare these two methods. Under the normality assumption, cross-validation is conservative, whereas Rao's test realizes its nominal type I error and has high power. Robustness is investigated through lognormal and uniform distributions. When simulation responses are distributed lognormally, then cross-validation using Ordinary Least Squares is the only technique that has acceptable type I error. Uniform distributions give results similar to the normal case. Once the regression model is validated, confidence intervals for the individual regression parameters are computed. The Monte Carlo experiment compares several confidence interval procedures. Under normality, Rao's procedure is preferred since it has good coverage probability and acceptable half-length. Under lognormality, Ordinary Least Squares achieves nominal coverage probability. Uniform distributions again give results similar to the normal case.
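A minimal sketch of the cross-validation idea described above, with illustrative data and names (not the authors' code): delete one design point at a time, refit by Ordinary Least Squares, and compare the held-out response with its prediction via a Student's t statistic, applying Bonferroni's inequality across the n comparisons.

```python
# Hypothetical sketch of a Kleijnen-style cross-validation check of a regression
# metamodel; the design, responses, and names below are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=n)

alpha = 0.05
reject = False
for i in range(n):
    keep = np.arange(n) != i
    Xi, yi = X[keep], y[keep]
    b, res, *_ = np.linalg.lstsq(Xi, yi, rcond=None)
    dof = Xi.shape[0] - Xi.shape[1]
    s2 = res[0] / dof                      # residual variance of the reduced fit
    x0 = X[i]
    # variance of the prediction error y_i - x0' b
    var_pred = s2 * (1.0 + x0 @ np.linalg.inv(Xi.T @ Xi) @ x0)
    t = (y[i] - x0 @ b) / np.sqrt(var_pred)
    if abs(t) > stats.t.ppf(1 - alpha / (2 * n), dof):   # Bonferroni-adjusted cutoff
        reject = True
print("metamodel rejected by cross-validation:", reject)
```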

71 citations


Journal ArticleDOI
TL;DR: It is found that for small samples, the bias of the variance parameter estimator figures significantly in CIE coverage performance—the less bias the better.
Abstract: We investigate the small-sample behavior and convergence properties of confidence interval estimators (CIEs) for the mean of a stationary discrete process. We consider CIEs arising from nonoverlapping batch means, overlapping batch means, and standardized time series, all of which are commonly used in discrete-event simulation. The performance measures of interest are the coverage probability, and the expected value and variance of the half-length. We use empirical and analytical methods to make detailed comparisons regarding the behavior of the CIEs for a variety of stochastic processes. All the CIEs under study are asymptotically valid; however, they are usually invalid for small sample sizes. We find that for small samples, the bias of the variance parameter estimator figures significantly in CIE coverage performance—the less bias the better. A secondary role is played by the marginal distribution of the stationary process. We also point out that some CIEs require fewer observations before manifesting ...
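As a concrete reference point for the nonoverlapping-batch-means CIE discussed above, here is a minimal sketch; the function name, batch count, and AR(1) test process are illustrative choices, not the authors' experimental setup.

```python
# Minimal sketch of a nonoverlapping-batch-means confidence interval estimator
# for the mean of a stationary sequence.
import numpy as np
from scipy import stats

def batch_means_ci(x, n_batches=10, alpha=0.10):
    """Nominal (1 - alpha) CI for the steady-state mean from one long run."""
    x = np.asarray(x, dtype=float)
    m = len(x) // n_batches                                  # batch size
    batches = x[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    grand_mean = batches.mean()
    se = batches.std(ddof=1) / np.sqrt(n_batches)            # std. error of grand mean
    half = stats.t.ppf(1 - alpha / 2, n_batches - 1) * se
    return grand_mean - half, grand_mean + half

# Example: an AR(1) process.  For small samples the variance-parameter estimator
# is biased and the interval tends to undercover, as the paper discusses.
rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(batch_means_ci(x))
```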

69 citations


Journal Article
TL;DR: In this paper, a simulation for a hypothetical population based on data reported in the literature from three sources was presented, and the simulated nominal 95 per cent confidence intervals contained the modelled population size only 30 per cent of the time.
Abstract: One encounters in the literature estimates of some rates of genetic and congenital disorders based on log-linear methods to model possible interactions among sources. Often the analyst chooses the simplest model consistent with the data for estimation of the size of a closed population and calculates confidence intervals on the assumption that this simple model is correct. However, despite an apparent excellent fit of the data to such a model, we note here that the resulting confidence intervals may well be misleading in that they can fail to provide an adequate coverage probability. We illustrate this with a simulation for a hypothetical population based on data reported in the literature from three sources. The simulated nominal 95 per cent confidence intervals contained the modelled population size only 30 per cent of the time. Only if external considerations justify the assumption of plausible interactions of sources would use of the simpler model's interval be justified.

59 citations


Journal ArticleDOI
TL;DR: The theory of conjugate duality, as discussed by the authors, can overcome the limitations of discretization within the 'strict bounds' formalism, a technique for constructing confidence intervals for functionals of the unknown model incorporating certain types of prior information.
Abstract: Many techniques for solving inverse problems involve approximating the unknown model, a function, by a finite-dimensional 'discretization' or parametric representation. The uncertainty in the computed solution is sometimes taken to be the uncertainty within the parametrization; this can result in unwarranted confidence. The theory of conjugate duality can overcome the limitations of discretization within the 'strict bounds' formalism, a technique for constructing confidence intervals for functionals of the unknown model incorporating certain types of prior information. The usual computational approach to strict bounds approximates the 'primal' problem in a way that the resulting confidence intervals are at most long enough to have the nominal coverage probability. There is another approach based on 'dual' optimization problems that gives confidence intervals with at least the nominal coverage probability. The pair of intervals derived by the two approaches bracket a correct confidence interval. The theory is illustrated with gravimetric, seismic, geomagnetic, and helioseismic problems and a numerical example in seismology.

44 citations


Journal ArticleDOI
TL;DR: The most commonly used method of setting confidence intervals for the correlation coefficient is based on the normal approximation to the Fisher Z-transformation; as discussed by the authors, this interval is shown to be conservative and is numerically confirmed to be tight, in the sense that the actual coverage probability is close to the preset value.
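For reference, the Fisher Z interval itself is a textbook construction; a minimal sketch follows, with an illustrative function name and example values.

```python
# Standard Fisher Z confidence interval for a correlation coefficient
# (the construction whose coverage the paper studies).
import numpy as np
from scipy import stats

def fisher_z_ci(r, n, alpha=0.05):
    """Approximate (1 - alpha) CI for rho from sample correlation r, sample size n."""
    z = np.arctanh(r)                                   # Fisher Z-transformation
    half = stats.norm.ppf(1 - alpha / 2) / np.sqrt(n - 3)
    return np.tanh(z - half), np.tanh(z + half)

print(fisher_z_ci(0.6, 30))    # roughly (0.31, 0.79)
```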

25 citations


Journal ArticleDOI
TL;DR: In this article, confidence intervals for an arbitrary population quantile based on interpolating adjacent order statistics are presented, and the obtained interval is shown to have approximately the required coverage probability over continuous distributions.
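A sketch of the underlying order-statistic construction may help: the exact coverage of the interval (X_(r), X_(s)) is a sum of binomial probabilities, and the paper's contribution is to interpolate between adjacent order statistics so that the attained coverage comes close to the nominal level. The sketch below shows only the baseline (non-interpolated) interval and its exact coverage; the index choice, names, and data are illustrative.

```python
# Distribution-free CI for the p-th quantile from order statistics.
# Coverage of (X_(r), X_(s)) is P(r <= B <= s-1) with B ~ Binomial(n, p).
import numpy as np
from scipy import stats

def quantile_ci(x, p=0.5, conf=0.95):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # approximate symmetric choice of 1-based order-statistic indices (r, s)
    r = max(int(stats.binom.ppf((1 - conf) / 2, n, p)), 1)
    s = min(int(stats.binom.ppf(1 - (1 - conf) / 2, n, p)) + 1, n)
    cover = stats.binom.cdf(s - 1, n, p) - stats.binom.cdf(r - 1, n, p)
    return x[r - 1], x[s - 1], cover

rng = np.random.default_rng(2)
lo, hi, cover = quantile_ci(rng.exponential(size=50), p=0.5, conf=0.95)
print(lo, hi, cover)    # attained coverage generally differs from 0.95
```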

22 citations


Journal ArticleDOI
TL;DR: It is shown that the bootstrap confidence interval is the one to use in age replacement problems and comparisons are made with the confidence interval obtained from asymptotic normal theory.
Abstract: Bootstrap confidence intervals for the actual cost of using a given nonparametric estimate of the optimal age replacement strategy are shown to have the claimed coverage probability. A numerical algorithm is given to obtain these confidence intervals in practice. The small sample behavior of these confidence intervals is illustrated by simulations. Finally, comparisons are made with the confidence interval obtained from asymptotic normal theory. We show that the bootstrap confidence interval is the one to use in age replacement problems.
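For orientation only, a generic percentile-bootstrap confidence interval looks like the sketch below; the paper's algorithm bootstraps the long-run cost of an estimated age-replacement policy rather than a simple mean, and its interval construction may differ in detail.

```python
# Generic percentile-bootstrap confidence interval (illustrative, not the
# paper's age-replacement algorithm).
import numpy as np

def percentile_bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    boot = np.array([statistic(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# Example: CI for the mean lifetime of components (illustrative data).
rng = np.random.default_rng(3)
lifetimes = rng.weibull(1.5, size=40)
print(percentile_bootstrap_ci(lifetimes, np.mean))
```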

13 citations


Book ChapterDOI
01 Jan 1992
TL;DR: The purpose in this chapter is to bring together salient parts of the technology described in Chapters 1 and 2, with the aim of explaining properties of bootstrap methods for estimating distributions and constructing confidence intervals.
Abstract: Our purpose in this chapter is to bring together salient parts of the technology described in Chapters 1 and 2, with the aim of explaining properties of bootstrap methods for estimating distributions and constructing confidence intervals. We shall emphasize the role played by pivotal methods, introduced in Section 1.3 (Example 1.2).
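As a reminder of what a pivotal method means in this context (generic notation, not quoted from the chapter): the Studentized quantity T is an asymptotic pivot, and its bootstrap analogue T* yields the bootstrap-t interval.

```latex
% Generic pivotal (bootstrap-t) construction; notation is assumed.
\[
T = \frac{\hat{\theta} - \theta}{\hat{\sigma}}, \qquad
T^{*} = \frac{\hat{\theta}^{*} - \hat{\theta}}{\hat{\sigma}^{*}},
\]
% The bootstrap distribution of T^{*} estimates that of T, giving the interval
\[
\bigl[\,\hat{\theta} - \hat{\sigma}\, t^{*}_{1-\alpha/2},\;
      \hat{\theta} - \hat{\sigma}\, t^{*}_{\alpha/2}\,\bigr],
\]
% where t^{*}_{q} is the q-quantile of the resampled values of T^{*}.
```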

10 citations


Book ChapterDOI
TL;DR: In this article, the Wild Bootstrap method is used to construct a fine grid of error bars with simultaneous coverage probability, which are then joined via polygon pieces or parabolae using assumptions on the local curvature of the regression curve.
Abstract: Bootstrap confidence bands are constructed for nonparametric regression. Resampling is based on a suitably estimated residual distribution; the procedure is called the Wild Bootstrap. The method first constructs a fine grid of error bars with simultaneous coverage probability. Second, the end-points of these error bars are joined via polygon pieces or parabolae, using assumptions on the local curvature of the regression curve.
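A minimal sketch of the wild-bootstrap resampling step is given below. It uses a Nadaraya-Watson smoother, Mammen's two-point perturbation distribution, and pointwise percentile error bars; all of these are illustrative choices rather than the chapter's exact construction, which makes the grid of error bars simultaneous and then joins their end-points.

```python
# Wild-bootstrap resampling for nonparametric regression (illustrative sketch).
import numpy as np

def kernel_smooth(x, y, grid, h):
    """Nadaraya-Watson estimate on `grid` with a Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
n, h = 100, 0.15
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)
grid = np.linspace(0.05, 0.95, 20)

fit_x = kernel_smooth(x, y, x, h)          # fit at the design points
resid = y - fit_x                          # estimated residuals
fit_g = kernel_smooth(x, y, grid, h)

# Mammen's two-point distribution for V: E V = 0, E V^2 = 1.
v_pos = (np.sqrt(5) + 1) / 2
v_neg = (1 - np.sqrt(5)) / 2
p_neg = (np.sqrt(5) + 1) / (2 * np.sqrt(5))

boot_fits = []
for _ in range(500):
    v = np.where(rng.random(n) < p_neg, v_neg, v_pos)
    y_star = fit_x + resid * v             # wild-bootstrap responses
    boot_fits.append(kernel_smooth(x, y_star, grid, h))
boot_fits = np.array(boot_fits)

err_lo, err_hi = np.percentile(boot_fits, [2.5, 97.5], axis=0)   # pointwise bars
print(np.column_stack([grid, err_lo, fit_g, err_hi])[:3])
```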

9 citations


Book ChapterDOI
01 Jan 1992
TL;DR: In this paper, the problem of constructing a good confidence set C_n for a given value τ = T(θ), where T is a given function, is studied.
Abstract: Suppose that the sample x_n has distribution P_{θ,n}, where θ ∈ Θ is unknown and may be either finite or infinite dimensional. Of interest is the value τ = T(θ), where T is a given function. This paper treats the problem of constructing a good confidence set C_n for τ.

Book ChapterDOI
01 Jan 1992
TL;DR: In this article, the design and bootstrap construction of asymptotically optimal prediction regions is discussed, where the emphasis is on devising simultaneous one-sided prediction intervals.
Abstract: This article discusses the design and bootstrap construction of asymptotically optimal prediction regions. The emphasis is on devising simultaneous one-sided prediction intervals. A good solution to this problem implies constructions for simultaneous two-sided prediction intervals and for multivariate prediction regions.

Journal ArticleDOI
TL;DR: In this paper, a lower bound for the number of additional observations after stopping is derived, which ensures the exact probability of coverage; two-stage, three-stage and modified sequential procedures are also proposed for the same estimation problem.
Abstract: The sequential procedure developed by Bhargava and Srivastava (1973, J. Roy. Statist. Soc. Ser. B, 35, 147–152) to construct fixed-width confidence intervals for contrasts in the means is further analyzed. Second-order approximations for the first two moments of the stopping time and for the coverage probability associated with the sequential procedure are obtained. A lower bound for the number of “additional” observations after stopping is derived, which ensures the “exact” probability of coverage. Moreover, two-stage, three-stage and “modified” sequential procedures are proposed for the same estimation problem. Relative advantages and disadvantages of these sampling schemes are discussed and their properties are studied.

Journal Article
TL;DR: The central limit theorem can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large.
Abstract: The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
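A hedged sketch of the two basic quantities mentioned above, a normal-theory CI and an estimate of the relative variance of the variance, is given below. The VOV expression used here is the standard moment estimator; the exact definition and cutoff used by a particular production code may differ, and the score data are illustrative.

```python
# Normal-theory CI for a Monte Carlo tally plus a relative variance-of-the-variance
# (VOV) estimate, as a rough check on whether N is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
scores = rng.exponential(size=10_000)        # illustrative history scores x_1 .. x_N

N = len(scores)
mean = scores.mean()
sem = scores.std(ddof=1) / np.sqrt(N)        # statistical error shrinks like 1/sqrt(N)
z = stats.norm.ppf(0.975)
ci = (mean - z * sem, mean + z * sem)        # nominal 95% CI from the CLT

d = scores - mean
vov = (d ** 4).sum() / (d ** 2).sum() ** 2 - 1.0 / N   # relative variance of the variance
print(ci, vov)        # a small VOV (commonly quoted guideline: < 0.1) supports the CLT CI
```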

Journal ArticleDOI
TL;DR: Two new methods for constructing simultaneous prediction regions are presented; each simultaneously asserts a collection of prediction regions, one prediction region for each future observable of interest, and the two methods show the same asymptotic performance.
Abstract: Two new methods for constructing simultaneous prediction regions are the subject of this article. Both methods simultaneously assert a collection of prediction regions, one prediction region for each future observable of interest. Both methods have the same aims: to control the overall coverage probability of the simultaneous prediction region and to keep equal the coverage probabilities of the individual prediction statements that make up the simultaneous region. The latter property is called balance.

Journal ArticleDOI
TL;DR: In this paper, it is shown that any finite-diameter confidence set has zero minimum coverage probability no matter how large the (fixed) sample size is, and that the answer remains negative for K-stage sequential sampling, for any finite positive integer K.

Journal ArticleDOI
TL;DR: In this article, the variable X to be predicted and the learning sample Y_n that was observed are assumed independent, with a joint distribution that depends on an unknown parameter; the authors show that the conditional coverage probability of a prediction region, given Y_n, converges in probability to α at a rate which usually cannot exceed n^{-1/2}.
Abstract: Suppose the variable $X$ to be predicted and the learning sample $Y_n$ that was observed are independent, with a joint distribution that depends on an unknown parameter $\theta$. A prediction region $D_n$ for $X$ is a random set, depending on $Y_n$, that contains $X$ with prescribed probability $\alpha$. In sufficiently regular models, $D_n$ can be constructed so that overall coverage probability converges to $\alpha$ at rate $n^{-r}$, where $r$ is any positive integer. This paper shows that the conditional coverage probability of $D_n$, given $Y_n$, converges in probability to $\alpha$ at a rate which usually cannot exceed $n^{-1/2}$.

Journal ArticleDOI
Zoltan Papp
TL;DR: Using the results of D.B. Owen (1965) on bivariate non-central t-variables, the authors obtain the exact value of the probability Pr[E(q)] under conditions that are satisfied in practical applications.
Abstract: Let (X1, …, Xn) be a random sample from a normal population N(μ, σ²). The random variable considered is the coverage of the two-sided tolerance interval (Φ(x) is the CDF of the standard normal variable). Let E(q) denote the event that this coverage is at least q. Using the results of D.B. Owen (1965) on bivariate non-central t-variables, we obtain the exact value of the probability Pr[E(q)] under conditions that are satisfied in practical applications. A numerical algorithm is also suggested to calculate Pr[E(q)]. Applications include finding two-sided tolerance intervals for normal populations and calculating lower confidence bounds for the fraction of products conforming to two-sided specifications.
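Several formulas in the abstract above were lost in extraction; the standard setup for this problem, which the surviving symbols appear to match, is sketched below as a hedged reconstruction (notation assumed, not quoted from the paper).

```latex
% Standard two-sided normal tolerance-interval setup: with sample mean \bar{X},
% sample standard deviation S, and tolerance factor k, the (random) coverage is
\[
C \;=\; \Phi\!\left(\frac{\bar{X} + kS - \mu}{\sigma}\right)
      - \Phi\!\left(\frac{\bar{X} - kS - \mu}{\sigma}\right),
\qquad
E(q) \;=\; \{\, C \ge q \,\},
\]
% so that \Pr[E(q)] is the probability that \bar{X} \pm kS covers at least a
% fraction q of the population.
```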

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that BCa confidence intervals can degenerate with very high probability when the acceleration constant a ⩾ 0.4.

Journal ArticleDOI
TL;DR: In this article, a sequential confidence interval of fixed width 2d, d > 0, is constructed for the correlation coefficient of a bivariate normal distribution, and it is shown that the coverage probability is approximately equal to a preassigned number γ, 0 < γ < 1, as d → 0.
Abstract: A sequential confidence interval of fixed width 2d, d > 0, is constructed for the correlation coefficient of a bivariate normal distribution. It is shown that the coverage probability is approximately equal to a preassigned number γ, 0 < γ < 1, as d → 0.

Journal ArticleDOI
TL;DR: In this article, a stopping time for fixed-width confidence interval estimation of a real-valued parameter θ(F) is considered and sufficient conditions for asymptotic efficiency and normality of the stopping time are given.
Abstract: Analogous to the usual stopping time for fixed-width confidence interval estimation of a real-valued parameter θ(F), a stopping time for fixed-width confidence band estimation of a statistical functional process R_F is considered. Techniques for showing that the resulting sequential confidence region has the correct asymptotic coverage probability are illustrated in several examples. Sufficient conditions for asymptotic efficiency and normality of the stopping time are given. Some implications of using the stopping time in conjunction with a bootstrap confidence band procedure are investigated in the context of estimating a distribution function.

Journal ArticleDOI
TL;DR: In this article, a new procedure for construction of simultaneous confidence bands for a distribution function is presented along with a Monte Carlo simulation study of the robustness of this procedure, based upon the confidence regions proposed by Littell and Rao (Technometrics, 1978).
Abstract: A new procedure for construction of simultaneous confidence bands for a distribution function is presented along with a Monte Carlo simulation study of the robustness of this procedure. The procedure is based upon the confidence regions proposed by Littell and Rao (Technometrics, 1978). The confidence band is shown to be quite robust against incorrect specification of the underlying parametric family assumed in the construction of the band. The band width is less than widths for nonparametric approaches and competitive with non-robust parametric bands in many cases. No new tables are required to construct the band.

Book ChapterDOI
01 Jan 1992
TL;DR: In this article, the authors present a new family of optimal design criteria for parameter estimation in nonlinear regression, based on minimization of expected volumes of, at least second-order correct, bootstrap confidence regions.
Abstract: This paper presents a new family of optimal design criteria for parameter estimation in nonlinear regression, based on minimization of expected volumes of, at least second-order correct, bootstrap confidence regions.

Journal ArticleDOI
TL;DR: In this article, the authors generalize a theorem of Brown and Cohen (1974) regarding interval estimation of the common mean to that of confidence region for two p - variate homoscedastic normal populations.
Abstract: Consider the problem of estimating the common mean vector of two independent p-variate normal distributions with dispersion matrices Σ1 and Σ2. For p = 1, results regarding point estimation of the common mean have been obtained by many authors: Graybill and Deal (1959), Brown and Cohen (1974), Bhattacharya (1981), and recently Kubokawa (1987). A multivariate problem related to Linear Models has been considered by Khatri and Shah (1974). Norwood and Hinkelmann (1977) have proved a result regarding the common mean of several normal populations. Brown and Cohen (1974) and Khatri and Shah (1981) have obtained an improved confidence interval for the common mean of a univariate normal distribution. In this article we generalize a theorem of Brown and Cohen (1974) regarding interval estimation of the common mean to that of a confidence region for the common mean vector of two p-variate homoscedastic normal populations. It is shown that the proposed confidence region has greater coverage probability than the usual confidence ...
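For context on the p = 1 case cited above, the classical Graybill and Deal (1959) combined estimator weights each sample mean by the reciprocal of its estimated variance; the standard form is shown below (notation assumed, quoted here only for orientation).

```latex
% Graybill-Deal combined estimator of the common mean for p = 1:
\[
\hat{\mu} \;=\;
\frac{\dfrac{n_1}{s_1^{2}}\,\bar{x}_1 + \dfrac{n_2}{s_2^{2}}\,\bar{x}_2}
     {\dfrac{n_1}{s_1^{2}} + \dfrac{n_2}{s_2^{2}}},
\]
% i.e. each sample mean \bar{x}_i is weighted by the reciprocal of its estimated
% variance s_i^2 / n_i .
```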

Book ChapterDOI
01 Jan 1992
TL;DR: Using the asymptotic theory and results of a Monte Carlo study, the authors provided some guidelines to draw inferences about the parameters of the MSAE regression model, and used this information to improve the performance of the model.
Abstract: Using the asymptotic theory and results of a Monte Carlo study, we provide some guidelines to draw inferences about the parameters of the MSAE regression model.