
Showing papers on "Coverage probability published in 2005"


Journal ArticleDOI
TL;DR: In this article, the authors proposed the false coverage-statement rate (FCR) as a measure of interval coverage following selection, and proposed a general procedure to construct a marginal CI for each selected parameter, but instead of the confidence level 1 − q being used marginally, q is divided by the number of parameters considered and multiplied by the number selected.
Abstract: Often in applied research, confidence intervals (CIs) are constructed or reported only for parameters selected after viewing the data. We show that such selected intervals fail to provide the assumed coverage probability. By generalizing the false discovery rate (FDR) approach from multiple testing to selected multiple CIs, we suggest the false coverage-statement rate (FCR) as a measure of interval coverage following selection. A general procedure is then introduced, offering FCR control at level q under any selection rule. The procedure constructs a marginal CI for each selected parameter, but instead of the confidence level 1 − q being used marginally, q is divided by the number of parameters considered and multiplied by the number selected. If we further use the FDR controlling testing procedure of Benjamini and Hochberg for selecting the parameters, the newly suggested procedure offers CIs that are dual to the testing procedure and are shown to be optimal in the independent case. Under the positive re...

591 citations
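
The FCR procedure lends itself to a compact implementation. The sketch below assumes approximately normal estimates with known standard errors and uses Benjamini-Hochberg selection; the function and variable names are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

def fcr_adjusted_cis(estimates, std_errors, q=0.05):
    """FCR-controlling CIs after Benjamini-Hochberg selection: select
    parameters by BH at level q, then build marginal intervals at the
    adjusted level 1 - R*q/m (m parameters considered, R selected)."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    m = est.size
    pvals = 2 * stats.norm.sf(np.abs(est / se))
    order = np.argsort(pvals)
    # BH step-up: largest k such that p_(k) <= k*q/m
    below = pvals[order] <= q * np.arange(1, m + 1) / m
    R = int(np.nonzero(below)[0].max() + 1) if below.any() else 0
    if R == 0:
        return []                              # nothing selected, no intervals
    z = stats.norm.ppf(1 - (R * q / m) / 2)    # level 1 - R*q/m, two-sided
    return [(int(i), est[i] - z * se[i], est[i] + z * se[i])
            for i in order[:R]]
```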


Journal ArticleDOI
TL;DR: It is argued that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen’s effect size.
Abstract: The authors argue that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen's effect size. The authors investigated coverage probability for confidence intervals for the new effect size measure. The confidence intervals were constructed by using the noncentral t distribution and the percentile bootstrap. Over the range of distributions and effect sizes investigated in the study, coverage probability was better for the percentile bootstrap confidence interval.

213 citations
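
A minimal sketch of the ingredients described in the abstract, assuming two independent samples: trimmed means via scipy, a pooled 20% Winsorized variance as the scale (one reasonable reading of the abstract), and a percentile bootstrap CI. The published estimator may differ in details, such as a rescaling constant that makes the measure match Cohen's effect size under normality; that is omitted here.

```python
import numpy as np
from scipy import stats

def robust_effect_size(x, y, trim=0.2):
    """20% trimmed-mean difference over a pooled 20% Winsorized SD."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    tm_diff = stats.trim_mean(x, trim) - stats.trim_mean(y, trim)
    wx = np.asarray(stats.mstats.winsorize(x, limits=(trim, trim)))
    wy = np.asarray(stats.mstats.winsorize(y, limits=(trim, trim)))
    nx, ny = wx.size, wy.size
    s2 = ((nx - 1) * wx.var(ddof=1) + (ny - 1) * wy.var(ddof=1)) / (nx + ny - 2)
    return tm_diff / np.sqrt(s2)

def percentile_bootstrap_ci(x, y, b=2000, alpha=0.05, seed=None):
    """Percentile bootstrap CI: resample each group, take quantiles."""
    rng = np.random.default_rng(seed)
    boot = [robust_effect_size(rng.choice(x, len(x), replace=True),
                               rng.choice(y, len(y), replace=True))
            for _ in range(b)]
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))
```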


Journal ArticleDOI
TL;DR: Applying these techniques to the biomarker thiobarbituric acid reaction substance (TBARS), a measure of sub‐products of lipid peroxidation that has been proposed as a discriminating measurement for cardiovascular disease, yields a 50% increase in diagnostic effectiveness at the optimal cut‐point.
Abstract: Random measurement error can attenuate a biomarker's ability to discriminate between diseased and non-diseased populations. A global measure of biomarker effectiveness is the Youden index, the maximum difference between sensitivity, the probability of correctly classifying diseased individuals, and 1-specificity, the probability of incorrectly classifying healthy individuals. We present an approach for estimating the Youden index and the associated optimal cut-point for a normally distributed biomarker that corrects for normally distributed random measurement error. We also provide confidence intervals for these corrected estimates using the delta method, and assess their coverage probability through simulation over a variety of situations. Applying these techniques to the biomarker thiobarbituric acid reaction substance (TBARS), a measure of sub-products of lipid peroxidation that has been proposed as a discriminating measurement for cardiovascular disease, yields a 50% increase in diagnostic effectiveness at the optimal cut-point. This result may lead to biomarkers that were once naively considered ineffective becoming useful diagnostic devices.

197 citations
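
For two normal populations the Youden index and its optimal cut-point have closed forms, and the measurement-error correction amounts to subtracting the error variance from each observed variance. The sketch below illustrates that idea under assumed parameter values; it does not reproduce the paper's estimators or its delta-method intervals.

```python
import numpy as np
from scipy import stats

def corrected_youden(mu0, s0, mu1, s1, s_err):
    """Youden index J and optimal cut-point for normal healthy (mu0, s0)
    and diseased (mu1, s1) groups, with observed variances corrected by
    subtracting the measurement-error variance s_err**2. Assumes
    mu1 > mu0 and s0, s1 > s_err."""
    v0, v1 = s0 ** 2 - s_err ** 2, s1 ** 2 - s_err ** 2
    sd0, sd1 = np.sqrt(v0), np.sqrt(v1)
    if np.isclose(v0, v1):
        cut = (mu0 + mu1) / 2.0          # equal-variance case
    else:
        # point where the two corrected normal densities cross
        disc = (mu0 - mu1) ** 2 + (v0 - v1) * np.log(v0 / v1)
        cut = (mu1 * v0 - mu0 * v1 - sd0 * sd1 * np.sqrt(disc)) / (v0 - v1)
    J = stats.norm.cdf(cut, mu0, sd0) - stats.norm.cdf(cut, mu1, sd1)
    return J, cut
```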


Journal ArticleDOI
TL;DR: In this article, a simple pivotal-based approach that produces prediction intervals and predictive distributions with well-calibrated frequentist probability interpretations is introduced, and efficient simulation methods for producing predictive distributions are considered.
Abstract: We consider parametric frameworks for the prediction of future values of a random variable Y, based on previously observed data X. Simple pivotal methods for obtaining calibrated prediction intervals are presented and illustrated. Frequentist predictive distributions are defined as confidence distributions, and their utility is demonstrated. A simple pivotal-based approach that produces prediction intervals and predictive distributions with well-calibrated frequentist probability interpretations is introduced, and efficient simulation methods for producing predictive distributions are considered. Properties related to an average Kullback-Leibler measure of goodness for predictive or estimated distributions are given. The predictive distributions here are shown to be optimal in certain settings with invariance structure, and to dominate plug-in distributions under certain conditions.

197 citations
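
The simplest instance of the pivotal idea is the textbook normal-model prediction interval, where (Y − X̄)/(s·sqrt(1 + 1/n)) is an exact t pivot. The sketch below shows this standard example only, not the paper's general construction.

```python
import numpy as np
from scipy import stats

def normal_prediction_interval(x, alpha=0.05):
    """Exact 1-alpha prediction interval for one future normal draw:
    (Y - Xbar)/(s*sqrt(1 + 1/n)) is a t_{n-1} pivot."""
    x = np.asarray(x, dtype=float)
    n, xbar, s = x.size, x.mean(), x.std(ddof=1)
    half = stats.t.ppf(1 - alpha / 2, df=n - 1) * s * np.sqrt(1 + 1 / n)
    return xbar - half, xbar + half
```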


Journal ArticleDOI
TL;DR: One-sided confidence intervals in the binomial, negative binomial and Poisson distributions are considered in this article, and it is shown that the standard Wald interval suffers from systematic bias in the coverage and so does the one-sided score interval.

141 citations


Journal ArticleDOI
TL;DR: ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability and compared favorably to other batch means procedures in an extensive experimental performance evaluation.
Abstract: We introduce ASAP3, a refinement of the batch means algorithms ASAP and ASAP2, that delivers point and confidence-interval estimators for the expected response of a steady-state simulation. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. ASAP3 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next, ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; and finally ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. Regarding not only conformance to the precision and coverage-probability requirements but also the mean and variance of the half-length of the delivered confidence interval, ASAP3 compared favorably to other batch means procedures (namely, ABATCH, ASAP, ASAP2, and LBATCH) in an extensive experimental performance evaluation.

92 citations
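
For orientation, the uncorrected batch-means confidence interval that ASAP3 refines looks like the sketch below; the Shapiro-Wilk testing, AR(1) fitting and Cornish-Fisher adjustment of ASAP3 itself are omitted.

```python
import numpy as np
from scipy import stats

def batch_means_ci(y, n_batches=32, alpha=0.05):
    """Classical (uncorrected) batch-means CI for the steady-state mean."""
    y = np.asarray(y, dtype=float)
    m = y.size // n_batches                       # batch size
    means = y[:m * n_batches].reshape(n_batches, m).mean(axis=1)
    half = (stats.t.ppf(1 - alpha / 2, df=n_batches - 1)
            * means.std(ddof=1) / np.sqrt(n_batches))
    return means.mean() - half, means.mean() + half
```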


Journal ArticleDOI
TL;DR: This work investigates the small-sample performance of the robust score test for correlated data and proposes several modifications to improve the performance, including a modification based on a simple adjustment to the usual robust score statistic by a factor of J/(J - 1) (where J is the number of clusters).
Abstract: The sandwich variance estimator of generalized estimating equations (GEE) may not perform well when the number of independent clusters is small. This could jeopardize the validity of the robust Wald test by causing inflated type I error and coverage probability of the corresponding confidence interval below the nominal level. Here, we investigate the small-sample performance of the robust score test for correlated data and propose several modifications to improve the performance. In a simulation study, we compare the robust score test to the robust Wald test for correlated Bernoulli and Poisson data, respectively. It is confirmed that the robust Wald test is too liberal whereas the robust score test is too conservative for small samples. To explain this puzzling operating difference between the two tests, we consider their applications to two special cases, one-sample and two-sample comparisons, thus motivating some modifications to the robust score test. A modification based on a simple adjustment to the usual robust score statistic by a factor of J/(J - 1) (where J is the number of clusters) reduces the conservativeness of the generalized score test. Simulation studies mimicking group-randomized clinical trials with binary and count responses indicated that it may improve the small-sample performance over that of the generalized score and Wald tests, with test size closer to the nominal level. Finally, we demonstrate the utility of our proposal by applying it to a group-randomized clinical trial, Trying Alternative Cafeteria Options in Schools (TACOS).

61 citations
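
A sketch of the adjustment's mechanics, assuming cluster-level score contributions evaluated under the null are already available from a GEE fit (forming them is model-specific and not shown); the standard robust score form used here is an assumption, not a reproduction of the paper's derivation.

```python
import numpy as np
from scipy import stats

def adjusted_robust_score_test(u):
    """Robust score test from cluster-level score contributions `u`
    (one row per cluster, evaluated under the null), with the statistic
    inflated by J/(J-1) to reduce small-sample conservativeness."""
    u = np.atleast_2d(np.asarray(u, dtype=float))
    J, p = u.shape
    total = u.sum(axis=0)
    v = u.T @ u                          # empirical variance of the score
    stat = float(total @ np.linalg.solve(v, total)) * J / (J - 1)
    return stat, stats.chi2.sf(stat, df=p)
```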


Journal ArticleDOI
TL;DR: Double bootstrap confidence intervals can be estimated using computational algorithms incorporating simple deterministic stopping rules that avoid unnecessary computations, and the efficiency gains are examined by means of a Monte Carlo study for examples of confidence intervals for a mean and for the cumulative impulse response in a second-order autoregressive model.

49 citations


Journal ArticleDOI
TL;DR: To identify the method best suited for small proportions, seven approximate methods and the Clopper–Pearson Exact method to calculate CIs were compared.
Abstract: Purpose: It is generally agreed that a confidence interval (CI) is usually more informative than a point estimate or p-value, but we rarely encounter small proportions with CIs in the pharmacoepidemiological literature. When a CI is given, it is only sporadically reported how it was calculated; this incorrectly suggests that there is a single method for calculating CIs. To identify the method best suited for small proportions, seven approximate methods and the Clopper-Pearson Exact method for calculating CIs were compared. Methods: In a simulation study, 90%, 95% and 99% CIs with sample size 1000 and proportions ranging from 0.001 to 0.01 were evaluated systematically. The main quality criteria were coverage and interval width. The methods are illustrated using data from pharmacoepidemiology studies. Results: Simulations showed that the standard Wald methods have insufficient coverage probability regardless of how the desired coverage is perceived. Overall, the Exact method and the Score method with continuity correction (CC) performed best. Real-life examples showed that the methods yield different results, too. Conclusions: For CIs for small proportions (pi

47 citations
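
Two of the compared intervals are easy to state exactly: the standard Wald interval (shown above to have insufficient coverage for small proportions) and the Clopper-Pearson Exact interval via beta quantiles. A sketch of both; the Score method with continuity correction is not reproduced here.

```python
import numpy as np
from scipy import stats

def wald_ci(x, n, alpha=0.05):
    """Standard Wald interval (poor coverage for small proportions)."""
    p = x / n
    h = stats.norm.ppf(1 - alpha / 2) * np.sqrt(p * (1 - p) / n)
    return max(0.0, p - h), min(1.0, p + h)

def clopper_pearson_ci(x, n, alpha=0.05):
    """Clopper-Pearson 'Exact' interval via beta quantiles."""
    lo = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi
```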


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate several asymptotic interval estimation methods for problems in which groups are of different sizes, and propose a method based on the score statistic with a correction for skewness and a method in which the logit function is applied to the MLE.
Abstract: Group testing, in which units are pooled together and tested as a group for the presence of an attribute, has been used in many fields of study, including blood testing, plant disease assessment, fisheries, and vector transmission of viruses. When groups are of unequal size, complications arise in the derivation of confidence intervals for the proportion of units in the population with the attribute. We evaluate several asymptotic interval estimation methods for problems in which groups are of different size. Each method is examined for its theoretical properties, and adapted or developed for group testing. In an initial assessment using a study of virus prevalence in carnations, four methods are found to be satisfactory, and are considered further—two based on the distribution of the MLE, one on the score statistic, and one on the likelihood ratio. The performance of each method is then tested empirically on five realistic group testing procedures, with the evaluation focusing on the coverage probability provided by the confidence intervals. A method based on the score statistic with a correction for skewness is recommended, followed by a method in which the logit function is applied to the MLE.

44 citations
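
The building block for all the intervals compared above is the MLE of prevalence from unequal-sized pools, where a pool of size s tests positive with probability 1 − (1 − p)^s. A minimal numerical sketch (the score, likelihood-ratio and skewness-corrected intervals are not reproduced):

```python
import numpy as np
from scipy import optimize

def group_testing_mle(sizes, positives):
    """MLE of per-unit prevalence p from pools of unequal sizes; a pool
    of size s tests positive with probability 1 - (1 - p)**s."""
    sizes = np.asarray(sizes, dtype=float)
    pos = np.asarray(positives, dtype=bool)

    def neg_loglik(p):
        q_s = (1 - p) ** sizes           # P(pool of size s is negative)
        return -(np.log(1 - q_s[pos]).sum() + np.log(q_s[~pos]).sum())

    res = optimize.minimize_scalar(neg_loglik, bounds=(1e-9, 1 - 1e-9),
                                   method="bounded")
    return res.x
```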


Journal ArticleDOI
TL;DR: For the most frequently used parametric and distribution-free methods of estimating univariate reference limits, implicit formulae are derived relating the sample size to the design parameters δ1, δ2 and β, and explicit approximation formulae for the computation of n are given.
Abstract: A new criterion is proposed for determining the sample size required for a study performed for the purpose of establishing reference intervals. The basic idea behind the criterion is to compare the empirical coverage (i.e. the probability content) of the reference region obtained from the sample with its target value (e.g. 95 per cent) and to set suitable limits δ1, δ2 on the difference between the two quantities, limits which must not be exceeded with sufficiently large probability β (e.g. β = 90 per cent). For the most frequently used parametric and distribution-free methods of estimating univariate reference limits, implicit formulae are derived relating the sample size to the design parameters δ1, δ2 and β. For symmetric specifications of (δ1, δ2), explicit approximation formulae for the computation of n are given. Exact values obtained by means of suitable numerical techniques are presented in a set of tables covering specifications of δ1, δ2 and β which can be recommended for real applications. The tables can be used both for one- and two-sided reference intervals.

Journal ArticleDOI
TL;DR: In this paper, a new generalized pivotal quantity is proposed based on the best linear unbiased estimator of the common mean and generalized inference, and an exact confidence interval is also derived.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the effects of smoothed bootstrap iterations on coverage probability of the smoothed-bootstrap and bootstrap-t confidence intervals for population quantiles, and established the optimal kernel bandwidths at various stages of smoothing procedures.
Abstract: This paper investigates the effects of smoothed bootstrap iterations on coverage probabilities of smoothed bootstrap and bootstrap-t confidence intervals for population quantiles, and establishes the optimal kernel bandwidths at various stages of the smoothing procedures. The conventional smoothed bootstrap and bootstrap-t methods have been known to yield one-sided coverage errors of orders O(n^(-1/2)) and o(n^(-2/3)), respectively, for intervals based on the sample quantile of a random sample of size n. We sharpen the latter result to O(n^(-5/6)) with proper choices of bandwidths at the bootstrapping and Studentization steps. We show further that calibration of the nominal coverage level by means of the iterated bootstrap succeeds in reducing the coverage error of the smoothed bootstrap percentile interval to the order O(n^(-2/3)) and that of the smoothed bootstrap-t interval to O(n^(-58/57)), provided that bandwidths are selected of appropriate orders. Simulation results confirm our asymptotic findings, suggesting that the iterated smoothed bootstrap-t method yields the most accurate coverage. On the other hand, the iterated smoothed bootstrap percentile interval has the advantage of being shorter and more stable than the bootstrap-t intervals.
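
The uniterated smoothed bootstrap percentile interval that serves as the baseline here can be sketched as follows; a rule-of-thumb Gaussian bandwidth stands in for the optimal-order bandwidth choices derived in the paper.

```python
import numpy as np

def smoothed_bootstrap_quantile_ci(x, q=0.5, b=2000, alpha=0.05, seed=None):
    """Smoothed bootstrap percentile interval for the q-th quantile:
    resample the data and add Gaussian kernel noise of bandwidth h."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)   # rule-of-thumb bandwidth
    samples = rng.choice(x, size=(b, n), replace=True)
    samples = samples + rng.normal(0.0, h, size=(b, n))  # smoothing step
    boot_q = np.quantile(samples, q, axis=1)
    return tuple(np.quantile(boot_q, [alpha / 2, 1 - alpha / 2]))
```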

Journal ArticleDOI
TL;DR: A local probability matching prior, as discussed by the authors, is a data-dependent approximation to a probability matching prior, such that the asymptotic order of approximation of the frequentist coverage probability is not degraded.
Abstract: Probability matching priors are priors for which the posterior probabilities of certain specified sets are exactly or approximately equal to their coverage probabilities. These priors arise as solutions of partial differential equations that may be difficult to solve, either analytically or numerically. Recently Levine & Casella (2003) presented an algorithm for the implementation of probability matching priors for an interest parameter in the presence of a single nuisance parameter. In this paper we develop a local implementation that is very much more easily computed. A local probability matching prior is a data-dependent approximation to a probability matching prior and is such that the asymptotic order of approximation of the frequentist coverage probability is not degraded. We illustrate the theory with a number of examples, including three discussed in Levine & Casella (2003).

Journal ArticleDOI
TL;DR: Methods are presented that yield point and interval estimates of the threshold that maximizes the population utility whenever the test results are normally or log-normally distributed among healthy and among diseased subjects, with equal variances.
Abstract: Putting a screening or a diagnostic test into everyday use requires the determination of its threshold. The authors present methods that yield point and interval estimates of the threshold that maximizes the population utility whenever the test results are normally or log-normally distributed among healthy and among diseased subjects, with equal variances. These methods were assessed for bias, coverage probability, coverage symmetry, and confidence-interval width using simulation. They proved to be asymptotically unbiased and to have a satisfactory coverage probability whenever the sample sizes of the healthy and the diseased subjects are equal to or greater than 50. The methods were next applied to determine an optimal threshold for the antibody load used to diagnose congenital toxoplasmosis at birth. The methods are easy to implement and impose few constraints; however, the sample sizes should be carefully determined according to the required accuracy.

Journal ArticleDOI
28 Feb 2005-Talanta
TL;DR: Four alternatively used methods for computing modified expanded uncertainties are compared according to the levels of confidence, widths and layouts of the obtained uncertainty intervals, together with a proposed new method which gives symmetric intervals with exactly the required level of confidence.

Journal ArticleDOI
TL;DR: A measure is proposed to assess measurement agreement for functional data, which are frequently encountered in medical research and many other research fields, and formulae are derived to compute the standard error and confidence intervals for the proposed measure.

Journal ArticleDOI
TL;DR: In this article, the authors present two spherical confidence sets for θ, both centred at a positive part Stein estimator T_S^+(X), and obtain the radius by approximating the upper α-point of the sampling distribution of ||T_S^+(X) − θ||^2 by the first two non-zero terms of its Taylor series about the origin.
Abstract: Suppose that X has a k-variate spherically symmetric distribution with mean vector θ and identity covariance matrix. We present two spherical confidence sets for θ, both centred at a positive part Stein estimator T_S^+(X). In the first, we obtain the radius by approximating the upper α-point of the sampling distribution of ||T_S^+(X) − θ||^2 by the first two non-zero terms of its Taylor series about the origin. We can analyse some of the properties of this confidence set and see that it performs well in terms of coverage probability, volume and conditional behaviour. In the second method, we find the radius by using a parametric bootstrap procedure. Here, even greater improvement in terms of volume over the usual confidence set is possible, at the expense of having a less explicit radius function. A real data example is provided, and extensions to the unknown covariance matrix and elliptically symmetric cases are discussed.
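
The centre of both confidence sets is the positive-part Stein estimator, which is simple to state (assuming k ≥ 3; the radius constructions above are not reproduced):

```python
import numpy as np

def positive_part_stein(x):
    """Positive-part Stein estimator: shrink X toward the origin,
    truncating the shrinkage factor at zero (k >= 3 assumed)."""
    x = np.asarray(x, dtype=float)
    shrink = max(0.0, 1.0 - (x.size - 2) / float(x @ x))
    return shrink * x
```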

Journal ArticleDOI
TL;DR: In this article, the authors apply an empirical likelihood ratio (ELR) method to the regression model and derive the limiting distribution of the ELR, on the basis of which they develop a confidence region for the vector of regression parameters.
Abstract: In recent years, regression models have been shown to be useful for predicting the long-term survival probabilities of patients in clinical trials. For inference on the vector of regression parameters, there are semiparametric procedures based on normal approximations. However, the accuracy of such procedures in terms of coverage probability can be quite low when the censoring rate is heavy. In this paper, we apply an empirical likelihood ratio (ELR) method to the regression model and derive the limiting distribution of the ELR. On the basis of the result, we develop a confidence region for the vector of regression parameters. Furthermore, we use a simulation study to compare the proposed method with the normal approximation-based method proposed by Jung [Jung, S., 1996, Regression analysis for long-term survival rate. Biometrika, 83, 227–232.]. Finally, the proposed procedure is illustrated with data from a clinical trial.

Journal ArticleDOI
Paul Kabaila
TL;DR: A new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction, is derived; the coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval.
Abstract: This paper considers a linear regression model with regression parameter vector β. The parameter of interest is θ = a^Tβ where a is specified. When, as a first step, a data-based variable selection (e.g. minimum Akaike information criterion) is used to select a model, it is common statistical practice to then carry out inference about θ, using the same data, based on the (false) assumption that the selected model had been provided a priori. The paper considers a confidence interval for θ with nominal coverage 1 - α constructed on this (false) assumption, and calls this the naive 1 - α confidence interval. The minimum coverage probability of this confidence interval can be calculated for simple variable selection procedures involving only a single variable. However, the kinds of variable selection procedures used in practice are typically much more complicated. For the real-life data presented in this paper, there are 20 variables each of which is to be either included or not, leading to 2^20 different models. The coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval. This paper derives a new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction. For these real-life data, the gain in efficiency of this Monte Carlo simulation due to conditioning ranged from 2 to 6. The paper also presents a simple one-dimensional search strategy for parameter values at which the coverage probability is relatively small. For these real-life data, this search leads to parameter values for which the coverage probability of the naive 0.95 confidence interval is 0.79 for variable selection using the Akaike information criterion and 0.70 for variable selection using the Bayes information criterion, showing that these confidence intervals are completely inadequate.
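
A toy Monte Carlo conveys the phenomenon (without the paper's conditioning-based variance reduction): estimate the coverage of the naive CI for a regression coefficient when AIC first decides whether to keep a correlated second regressor. All settings below are illustrative, not the paper's real-life data.

```python
import numpy as np
from scipy import stats

def naive_ci_coverage(beta2=0.3, n=50, reps=4000, alpha=0.05, seed=1):
    """Coverage of the naive CI for beta1 in y = beta1*x1 + beta2*x2 + e
    when AIC first decides whether to keep x2 and inference then
    pretends the selected model was fixed in advance."""
    rng = np.random.default_rng(seed)
    x1 = rng.standard_normal(n)
    x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)   # correlated regressors
    designs = [np.column_stack([x1, x2]), x1[:, None]]
    hits = 0
    for _ in range(reps):
        y = 1.0 * x1 + beta2 * x2 + rng.standard_normal(n)
        fits = []
        for X in designs:
            b, (rss,), *_ = np.linalg.lstsq(X, y, rcond=None)
            k = X.shape[1]
            fits.append((n * np.log(rss / n) + 2 * k, X, b, rss, k))
        _, X, b, rss, k = min(fits, key=lambda f: f[0])   # AIC selection
        se = np.sqrt(rss / (n - k) * np.linalg.inv(X.T @ X)[0, 0])
        t = stats.t.ppf(1 - alpha / 2, df=n - k)
        hits += abs(b[0] - 1.0) <= t * se
    return hits / reps    # often well below 1 - alpha
```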

Journal ArticleDOI
TL;DR: In this article, lower bounds for probabilistic error subject to a mean squared error constraint are given, with consequences for the expected length of variable-length confidence intervals centred on adaptive estimators.
Abstract: Lower bounds are given for probabilistic error subject to a mean squared error constraint. Consequences for the expected length of variable length confidence intervals centred on adaptive estimators are given. It is shown that in many contexts centring confidence intervals on adaptive estimators must lead either to poor coverage probability or unnecessarily long intervals.

Journal ArticleDOI
TL;DR: In this paper, the authors compare a number of equal-tailed confidence intervals for the binomial distribution and show that methods that produce superior intervals, as measured by coverage and length, need not perform well in terms of p-confidence and p-bias.
Abstract: Confidence intervals for discrete distributions are often evaluated only by coverage and expected length. We discuss two additional criteria, p-confidence and p-bias. The choice of these criteria is motivated by the interpretation of a confidence interval as being the set of parameter values not rejected by a hypothesis test. Using these additional criteria we compare a number of equal-tailed confidence intervals for the binomial distribution. It is shown that methods that produce superior intervals, as measured by coverage and length, need not perform well in terms of p-confidence and p-bias. Cox's measuring device example is discussed to motivate the need for criteria beyond coverage and length.

Journal ArticleDOI
TL;DR: This article considers exact and approximate unconditional confidence intervals for rate difference via inverting a score test and shows that the approximate unconditional score confidence interval estimators based on inverting the score test demonstrate reasonably good coverage properties even in small-sample designs, and yet are relatively easy to implement computationally.
Abstract: Paired dichotomous data may arise in clinical trials such as pre-/post-test comparison studies and equivalence trials. Reporting parameter estimates (e.g. odds ratio, rate difference and rate ratio) along with their associated confidence interval estimates becomes a necessity in many medical journals. Various asymptotic confidence interval estimators have long been developed for differences in correlated binary proportions. Nevertheless, the performance of these asymptotic methods may have poor coverage properties in small samples. In this article, we investigate several alternative confidence interval estimators for the difference between binomial proportions based on small-sample paired data. Specifically, we consider exact and approximate unconditional confidence intervals for the rate difference via inverting a score test. The exact unconditional confidence interval guarantees the coverage probability, and it is recommended if strict control of coverage probability is required. However, the exact method tends to be overly conservative and computationally demanding. Our empirical results show that the approximate unconditional score confidence interval estimators based on inverting the score test demonstrate reasonably good coverage properties even in small-sample designs, and yet they are relatively easy to implement computationally. We illustrate the methods using real examples from a pain management study and a cancer study.

Journal ArticleDOI
TL;DR: In this article, a new method is proposed for constructing confidence intervals on the response variance in the unbalanced case of the one-way variance component model via generalized inference, which can be derived by the fiducial method directly and easily.
Abstract: In this article, a new method is proposed for constructing confidence intervals on the response variance in the unbalanced case of the one-way variance component model via generalized inference. It is shown that the generalized pivotal quantity in the method can be derived by the fiducial method directly and easily. To compare the resulting interval with the Modified Large Sample (MLS) interval of Burdick and Graybill (1984) and an approximate generalized confidence interval, a simulation study is conducted. The results indicate that the proposed method performs better than the other two methods, especially for very unbalanced designs.

Journal ArticleDOI
TL;DR: It is demonstrated, through computer simulations, that the resulting asymptotic Wald confidence intervals cannot be trusted to achieve the desired confidence levels, and that one should be cautious in using the usual linearized standard errors of the MLE and the associated confidence intervals.
Abstract: Regression models are routinely used in many applied sciences for describing the relationship between a response variable and an independent variable. Statistical inferences on the regression parameters are often performed using the maximum likelihood estimators (MLE). In the case of nonlinear models, the standard errors of the MLE are often obtained by linearizing the nonlinear function around the true parameter and by appealing to large sample theory. In this article we demonstrate, through computer simulations, that the resulting asymptotic Wald confidence intervals cannot be trusted to achieve the desired confidence levels. Sometimes their coverage can fall below the nominal level, making them liberal. Hence one needs to be cautious in using the usual linearized standard errors of the MLE and the associated confidence intervals.
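
A simulation in the spirit of the article can be run in a few lines: fit a nonlinear model by least squares and track how often the linearization-based Wald interval covers the truth. The exponential model and settings below are illustrative, not the authors' design.

```python
import numpy as np
from scipy import optimize, stats

def wald_coverage(reps=2000, n=30, alpha=0.05, seed=0):
    """Monte Carlo coverage of the linearization-based Wald CI for b in
    y = a*exp(-b*x) + e, fitted by nonlinear least squares."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 2.0, n)
    a_true, b_true, sigma = 1.0, 1.5, 0.1
    z = stats.norm.ppf(1 - alpha / 2)
    model = lambda t, a, b: a * np.exp(-b * t)
    hits = 0
    for _ in range(reps):
        y = model(x, a_true, b_true) + rng.normal(0.0, sigma, n)
        popt, pcov = optimize.curve_fit(model, x, y, p0=(1.0, 1.0))
        hits += abs(popt[1] - b_true) <= z * np.sqrt(pcov[1, 1])
    return hits / reps    # can drift below 1 - alpha in harder settings
```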

Journal ArticleDOI
TL;DR: In this article, an Edgeworth expansion for the studentized difference between two binomial proportions of paired data was derived and a transformation based confidence interval for the difference was derived.

Journal ArticleDOI
TL;DR: In this paper, the authors construct explicit minimax expected length confidence sets for a variety of one-dimensional statistical models, including the bounded normal mean with known and with unknown variance.
Abstract: We study confidence sets for a parameter θ ∈ Θ that have minimax expected measure among random sets with at least 1 - α coverage probability. We characterize the minimax sets using duality, which helps to find confidence sets with small expected measure and to bound improvements in expected measure compared with standard confidence sets. We construct explicit minimax expected length confidence sets for a variety of one-dimensional statistical models, including the bounded normal mean with known and with unknown variance. For the bounded normal mean with unit variance, the minimax expected measure 95% confidence interval has a simple form for Θ = [-τ, τ] with τ ≤ 3.25. For Θ = [-3, 3], the maximum expected length of the minimax interval is about 14% less than that of the minimax fixed-length affine confidence interval and about 16% less than that of the truncated conventional interval [X - 1.96, X + 1.96] ∩ [-3, 3].

Journal ArticleDOI
TL;DR: In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested, and it is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions.
Abstract: In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions, and hence can be used to construct the confidence regions of the parameters. Our methods can also deal with the confidence region construction for the index in the pure single-index model. A simulation study indicates that, in terms of coverage probabilities and average areas of the confidence regions, the proposed methods perform better than the least-squares method.

Journal ArticleDOI
01 Jan 2005
TL;DR: In this article, an interval estimator is developed using a weighted Polya posterior; the resulting estimator is essentially the Agresti-Coull confidence interval with some improved features, and it is shown that the weighted Polya posterior produces an effective interval estimator for small sample sizes and severely skewed binomial distributions.
Abstract: Recently, the interval estimation of a binomial proportion has been revisited in various literatures. This is mainly due to the erratic behavior of the coverage probability of the well-known Wald confidence interval. Various alternatives have been proposed. Among them, the Agresti-Coull confidence interval has been recommended by Brown et al. (2001), along with other confidence intervals, for large samples, say n ≥ 40. On the other hand, a noninformative Bayesian approach called the Polya posterior often produces statistics with good frequentist properties. In this note, an interval estimator is developed using a weighted Polya posterior. The resulting interval estimator is essentially the Agresti-Coull confidence interval with some improved features. It is shown that the weighted Polya posterior produces an effective interval estimator for small sample sizes and severely skewed binomial distributions.
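
For reference, the Agresti-Coull interval that the weighted Polya-posterior estimator essentially reproduces: add z²/2 pseudo-successes and pseudo-failures, then apply the Wald form to the shifted estimate. The weighted-posterior refinements themselves are not sketched here.

```python
import numpy as np
from scipy import stats

def agresti_coull_ci(x, n, alpha=0.05):
    """Agresti-Coull interval: add z**2/2 pseudo-successes and
    pseudo-failures, then use the Wald form around the shifted estimate."""
    z = stats.norm.ppf(1 - alpha / 2)
    n_t = n + z ** 2
    p_t = (x + z ** 2 / 2) / n_t
    h = z * np.sqrt(p_t * (1 - p_t) / n_t)
    return max(0.0, p_t - h), min(1.0, p_t + h)
```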

Book ChapterDOI
22 Nov 2005
TL;DR: This paper derives two different necessary and sufficient conditions, respectively, for the situations in which the density function achieves its minimum value on a set with positive Lebesgue measure or at finitely many points.
Abstract: In this paper we study the limiting achievable coverage problems of sensor networks. For sensor networks with uniform distributions we obtain a complete characterization of the coverage probability. For sensor networks with non-uniform distributions, we derive two different necessary and sufficient conditions, respectively, for the situations in which the density function achieves its minimum value on a set with positive Lebesgue measure or at finitely many points. We also propose an economical scheme for the coverage of sensor networks with empirical distributions.
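
The coverage probability studied here can be estimated numerically for a finite deployment: the sketch below places uniformly distributed sensors on the unit square and checks the fraction of random target points within sensing range of at least one sensor (boundary effects are included; the names and settings are illustrative, not from the paper).

```python
import numpy as np

def coverage_probability_mc(n_sensors=200, radius=0.05, trials=5000, seed=0):
    """Fraction of random target points on the unit square lying within
    sensing range of at least one uniformly placed sensor."""
    rng = np.random.default_rng(seed)
    sensors = rng.random((n_sensors, 2))
    targets = rng.random((trials, 2))
    # squared distance from every target to every sensor, then min over sensors
    d2 = ((targets[:, None, :] - sensors[None, :, :]) ** 2).sum(axis=2)
    return float((d2.min(axis=1) <= radius ** 2).mean())
```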