
Showing papers on "Coverage probability published in 2015"


Journal ArticleDOI
TL;DR: It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model.

386 citations


Journal ArticleDOI
TL;DR: The β-GPP, an intermediate class between the PPP and the GPP, is introduced and promoted as a model for wireless networks whose nodes exhibit repulsion, and it is found that the fitted β-GPP can closely model the deployment of actual base stations in terms of coverage probability and other statistics.
Abstract: The spatial structure of transmitters in wireless networks plays a key role in evaluating mutual interference and, hence, performance. Although the Poisson point process (PPP) has been widely used to model the spatial configuration of wireless networks, it is not suitable for networks with repulsion. The Ginibre point process (GPP) is one of the main examples of determinantal point processes that can be used to model random phenomena where repulsion is observed. Considering the accuracy, tractability, and practicability tradeoffs, we introduce and promote the $\beta$-GPP, which is an intermediate class between the PPP and the GPP, as a model for wireless networks when the nodes exhibit repulsion. To show that the model leads to analytically tractable results in several cases of interest, we derive the mean and variance of the interference using two different approaches: the Palm measure approach and the reduced second-moment approach, and then provide approximations of the interference distribution by three known probability density functions. In addition, to show that the model is relevant for cellular systems, we derive the coverage probability of a typical user and find that the fitted $\beta$-GPP can closely model the deployment of actual base stations in terms of coverage probability and other statistics.
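
For readers who want to reproduce the baseline such fits are compared against, a coverage probability of this kind can be estimated by simple Monte Carlo. The sketch below is not the authors' $\beta$-GPP machinery; it drops base stations as a homogeneous PPP (the repulsion-free reference case), associates a typical user at the origin with the nearest base station, and estimates P(SIR > θ) under Rayleigh fading. Density, path-loss exponent, and window size are arbitrary illustrative values.

```python
import numpy as np

def ppp_coverage_probability(lam=1e-5, alpha=4.0, theta_db=0.0,
                             radius=2000.0, n_trials=2000, seed=0):
    """Monte Carlo estimate of P(SIR > theta) for a typical user at the origin,
    with base stations drawn as a homogeneous PPP, nearest-BS association,
    Rayleigh fading, and power-law path loss (interference-limited, no noise)."""
    rng = np.random.default_rng(seed)
    theta = 10.0 ** (theta_db / 10.0)
    covered = 0
    for _ in range(n_trials):
        n_bs = rng.poisson(lam * np.pi * radius ** 2)   # BS count in a disc of given radius
        if n_bs == 0:
            continue
        r = radius * np.sqrt(rng.random(n_bs))          # distances of uniform points in the disc
        h = rng.exponential(1.0, n_bs)                  # Rayleigh fading -> exp(1) power gains
        rx = h * r ** (-alpha)                          # received powers (unit transmit power)
        serving = np.argmin(r)                          # associate with the nearest BS
        interference = rx.sum() - rx[serving]
        sir = rx[serving] / interference if interference > 0 else np.inf
        covered += sir > theta
    return covered / n_trials

print("estimated coverage probability:", ppp_coverage_probability())
```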

255 citations


Journal ArticleDOI
TL;DR: This article examines the performance of the updated quality effects (QE) estimator for meta-analysis of heterogeneous studies and shows that this approach leads to a decreased mean squared error (MSE) of the estimator while maintaining the nominal level of coverage probability of the confidence interval.

160 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived the limiting distributions of least squares averaging estimators for linear regression models in a local asymptotic framework, and then developed a plug-in averaging estimator that minimizes the sample analog of the mean squared error.

84 citations


Journal ArticleDOI
TL;DR: The proposed method aims at quantifying the uncertainty in the prediction arising from both the input data and the prediction model, and shows that the crisp approach is less reliable than the interval-valued input approach in terms of capturing the variability in input.
Abstract: We consider the task of performing prediction with neural networks (NNs) on the basis of uncertain input data expressed in the form of intervals. We aim at quantifying the uncertainty in the prediction arising from both the input data and the prediction model. A multilayer perceptron NN is trained to map interval-valued input data onto interval outputs, representing the prediction intervals (PIs) of the real target values. The NN training is performed by nondominated sorting genetic algorithm-II, so that the PIs are optimized both in terms of accuracy (coverage probability) and dimension (width). Demonstration of the proposed method is given in two case studies: 1) a synthetic case study, in which the data have been generated with a 5-min time frequency from an autoregressive moving average model with either Gaussian or Chi-squared innovation distribution and 2) a real case study, in which experimental data consist of wind speed measurements with a time step of 1 h. Comparisons are given with a crisp (single-valued) approach. The results show that the crisp approach is less reliable than the interval-valued input approach in terms of capturing the variability in input.
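
The two PI criteria optimized here, coverage probability and width, reduce to simple statistics once intervals are in hand. The helper below is a generic sketch of those metrics; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def pi_metrics(y_true, lower, upper):
    """Prediction-interval coverage probability (PICP) and mean PI width (MPIW)."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    inside = (y_true >= lower) & (y_true <= upper)
    picp = inside.mean()                 # fraction of targets covered by their PI
    mpiw = (upper - lower).mean()        # average interval width
    return picp, mpiw

# Toy check: nominal 90% intervals around a trivial point forecast.
rng = np.random.default_rng(0)
y = rng.normal(size=1000)                # targets with unit noise around 0
y_hat = np.zeros(1000)                   # a trivial point forecast
lo, hi = y_hat - 1.645, y_hat + 1.645    # nominal 90% intervals
print(pi_metrics(y, lo, hi))             # PICP should be close to 0.90
```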

69 citations


Journal ArticleDOI
TL;DR: It is shown that application of the proposed method improves the quality of constructed PIs by more than 28% over the existing technique, leading to narrower PIs with a coverage probability greater than the nominal confidence level.
Abstract: This brief proposes an efficient technique for the construction of optimized prediction intervals (PIs) by using the bootstrap technique. The method employs an innovative PI-based cost function in the training of neural networks (NNs) used for estimation of the target variance in the bootstrap method. An optimization algorithm is developed for minimization of the cost function and adjustment of NN parameters. The performance of the optimized bootstrap method is examined for seven synthetic and real-world case studies. It is shown that application of the proposed method improves the quality of constructed PIs by more than 28% over the existing technique, leading to narrower PIs with a coverage probability greater than the nominal confidence level.
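
A minimal pairs-bootstrap sketch of the general idea, with scikit-learn's MLPRegressor standing in for the paper's networks and without the paper's PI-based cost function or the separate target-variance NN:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def bootstrap_prediction_interval(X, y, X_new, n_boot=20, z=1.645, seed=0):
    """Pairs-bootstrap 90% prediction intervals from an ensemble of small NNs."""
    rng = np.random.default_rng(seed)
    preds = []
    for b in range(n_boot):
        idx = rng.integers(0, len(y), len(y))          # resample pairs with replacement
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500,
                           random_state=b).fit(X[idx], y[idx])
        preds.append(net.predict(X_new))
    preds = np.array(preds)
    mean = preds.mean(axis=0)
    model_var = preds.var(axis=0, ddof=1)              # ensemble (model) variance
    # Crude noise-variance estimate from in-sample residuals of a single refit.
    resid = y - MLPRegressor(hidden_layer_sizes=(16,), max_iter=500,
                             random_state=0).fit(X, y).predict(X)
    noise_var = resid.var(ddof=1)
    half = z * np.sqrt(model_var + noise_var)
    return mean - half, mean + half

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 200)
lo, hi = bootstrap_prediction_interval(X, y, X[:20])
```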

59 citations


Journal ArticleDOI
TL;DR: Numerical results confirm the effectiveness of the proposed system-level optimization to enhance the coverage probability in the interference-limited regime, and a system-level and interference-aware optimization criterion of the bias coefficients is proposed.
Abstract: In this paper, a tractable mathematical framework for the analysis and optimization of two-hop relay-aided cellular networks is introduced. The proposed approach leverages stochastic geometry for system-level analysis, by modeling the locations of base stations, relay nodes and mobile terminals as points of homogeneous Poisson point processes. A flexible cell association and relay-aided transmission protocol based on the best biased average received power are considered. Computationally tractable integrals and closed-form expressions for coverage and rate are provided, and the performance trends of relay-aided cellular networks are identified. It is shown that coverage and rate highly depend on the path-loss exponents of one- and two-hop links. In the interference-limited regime, in particular, it is shown that, if the system is not adequately designed, the presence of relay nodes may provide negligible performance gains. By capitalizing on the proposed mathematical framework, a system-level and interference-aware optimization criterion of the bias coefficients is proposed. Numerical results confirm the effectiveness of the proposed system-level optimization to enhance the coverage probability in the interference-limited regime. The presence of relays, on the other hand, is shown to have a limited impact on average/coverage rate under the same assumptions.

44 citations


Journal ArticleDOI
TL;DR: An analytic expression for the average coverage probability of a cellular user and the corresponding number of potential D2D users is derived using the mature framework of stochastic geometry and the Poisson point process.
Abstract: Device-to-device (D2D) communication has huge potential for capacity and coverage enhancements for next generation cellular networks. The number of potential nodes for D2D communication is an important parameter that directly impacts the system capacity. In this letter, we derive an analytic expression for the average coverage probability of a cellular user and the corresponding number of potential D2D users. In this context, the mature framework of stochastic geometry and the Poisson point process has been used. The retention probability has been incorporated in the Laplace functional to capture D2D pairing based on reduced path loss and a shortest-distance criterion. The numerical results show a close match between the analytic expression and the simulation setup.
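
The "number of potential D2D users" and the role of the retention probability can be illustrated with a small Monte Carlo: drop users as a PPP, thin them independently with the retention probability, and count retained users within a maximum pairing distance of a typical user at the origin. All parameter values below are placeholders, not the letter's settings.

```python
import numpy as np

def expected_d2d_candidates(user_density=1e-4, retention_p=0.3,
                            pair_radius=50.0, window=1000.0,
                            n_trials=2000, seed=1):
    """Average number of retained (D2D-capable) users within pair_radius of the origin."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_trials):
        n = rng.poisson(user_density * window ** 2)               # users in a square window
        xy = rng.uniform(-window / 2, window / 2, size=(n, 2))
        keep = rng.random(n) < retention_p                        # independent thinning
        d = np.linalg.norm(xy[keep], axis=1)
        counts.append(np.sum(d <= pair_radius))
    # For an independently thinned PPP this should match retention_p * density * pi * r^2.
    return np.mean(counts)

print(expected_d2d_candidates())
```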

37 citations


Journal ArticleDOI
TL;DR: Considering that VLC is set to be one of the most challenging technologies for domestic applications in 5G networks, the analysis on coverage issues discussed herein is of particular interest in practical scenarios.
Abstract: Achieving both illumination and data reception with probability almost one inside a specific indoor environment is not an effortless task in visible light communication (VLC) networks. Several factors such as error probability, transmitted power, dimming factor, or node failure affect coverage probability to a large extent. To assure reliable signal reception, a dense transmitting network is required on the ceiling. In this paper, we investigate how the key factors contribute to a network deployment with a reliable degree of coverage at a specific horizontal plane. Considering that VLC is set to be one of the most challenging technologies for domestic applications in 5G networks, the analysis on coverage issues discussed herein is of particular interest in practical scenarios.

37 citations


Posted Content
TL;DR: This paper establishes the convergence rates of the minimax expected length for confidence intervals in the oracle setting where the sparsity parameter is given, and focuses on the problem of adaptation to sparsity for the construction of confidence intervals.
Abstract: Confidence sets play a fundamental role in statistical inference. In this paper, we consider confidence intervals for high dimensional linear regression with random design. We first establish the convergence rates of the minimax expected length for confidence intervals in the oracle setting where the sparsity parameter is given. The focus is then on the problem of adaptation to sparsity for the construction of confidence intervals. Ideally, an adaptive confidence interval should have its length automatically adjusted to the sparsity of the unknown regression vector, while maintaining a prespecified coverage probability. It is shown that such a goal is in general not attainable, except when the sparsity parameter is restricted to a small region over which the confidence intervals have the optimal length of the usual parametric rate. It is further demonstrated that the lack of adaptivity is not due to the conservativeness of the minimax framework, but is fundamentally caused by the difficulty of learning the bias accurately.

34 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare several confidence intervals after model selection in the setting recently studied by Berk et al. and find that the actual coverage probabilities of all these intervals deviate only moderately from the desired nominal coverage probability.
Abstract: We compare several confidence intervals after model selection in the setting recently studied by Berk et al. (2013), where the goal is to cover not the true parameter but a certain non-standard quantity of interest that depends on the selected model. In particular, we compare the PoSI-intervals that are proposed in that reference with the `naive' confidence interval, which is constructed as if the selected model were correct and fixed a-priori (thus ignoring the presence of model selection). Overall, we find that the actual coverage probabilities of all these intervals deviate only moderately from the desired nominal coverage probability. This finding is in stark contrast to several papers in the existing literature, where the goal is to cover the true parameter.

Journal ArticleDOI
TL;DR: The lack of robustness of the sample versions of the multivariate coefficients of variation (MCV) is illustrated by means of influence functions, and some robust counterparts based either on the Minimum Covariance Determinant (MCD) estimator or on the S estimator are advocated.

Journal ArticleDOI
TL;DR: In this paper, an easy and direct way to define and compute the fiducial distribution of a real parameter for both continuous and discrete exponential families is developed, which satisfies the requirements to be considered a confidence distribution.
Abstract: We develop an easy and direct way to define and compute the fiducial distribution of a real parameter for both continuous and discrete exponential families. Furthermore, such a distribution satisfies the requirements to be considered a confidence distribution. Many examples are provided for models, which, although very simple, are widely used in applications. A characterization of the families for which the fiducial distribution coincides with a Bayesian posterior is given, and the strict connection with Jeffreys prior is shown. Asymptotic expansions of fiducial distributions are obtained without any further assumptions, and again, the relationship with the objective Bayesian analysis is pointed out. Finally, using the Edgeworth expansions, we compare the coverage of the fiducial intervals with that of other common intervals, proving the good behaviour of the former.

Journal ArticleDOI
TL;DR: The EUDPE method provides a universal and effective means to carry out the lower bound analysis of both the coverage probability and the average throughput for various base-station distribution models that can be found in practice, including the stochastic Poisson point process (PPP) model.
Abstract: This paper proposes a novel tractable approach for accurately analyzing both the coverage probability and the achievable throughput of cellular networks. Specifically, we derive a new procedure referred to as the equivalent uniform-density plane-entity (EUDPE) method for evaluating the other-cell interference. Furthermore, we demonstrate that our EUDPE method provides a universal and effective means to carry out the lower bound analysis of both the coverage probability and the average throughput for various base-station distribution models that can be found in practice, including the stochastic Poisson point process (PPP) model, a uniformly and randomly distributed model, and a deterministic grid-based model. The lower bounds of coverage probability and average throughput calculated by our proposed method agree with the simulated coverage probability and average throughput results and those obtained by the existing PPP-based analysis, if not better. Moreover, based on our new definition of cell edge boundary, we show that the cellular topology with randomly distributed base stations (BSs) only tends toward the Voronoi tessellation when the path-loss exponent is sufficiently high, which reveals the limitation of this popular network topology.

Journal ArticleDOI
TL;DR: This article focuses on the relative risk and the odds ratio when data are collected from a matched-pairs design or a two-arm independent binomial experiment.
Abstract: For comparison of proportions, there are three commonly used measurements: the difference, the relative risk, and the odds ratio. Significant effort has been spent on exact confidence intervals for the difference. In this article, we focus on the relative risk and the odds ratio when data are collected from a matched-pairs design or a two-arm independent binomial experiment. Exact one-sided and two-sided confidence intervals are proposed for each configuration of two measurements and two types of data. The one-sided intervals are constructed using an inductive order, they are the smallest under the order, and are admissible under the set inclusion criterion. The two-sided intervals are the intersection of two one-sided intervals. R codes are developed to implement the intervals. Supplementary materials for this article are available online.
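
As a point of reference for why exact intervals matter, the coverage probability of the ordinary large-sample (Wald) interval for the odds ratio in a two-arm independent binomial experiment can be checked by simulation. This sketch does not implement the paper's exact intervals; it only illustrates the coverage check itself.

```python
import numpy as np
from scipy.stats import norm

def wald_or_coverage(p1=0.3, p2=0.5, n1=40, n2=40, level=0.95,
                     n_sim=20000, seed=0):
    """Monte Carlo coverage probability of the Wald CI for the log odds ratio."""
    rng = np.random.default_rng(seed)
    z = norm.ppf(0.5 + level / 2)
    true_log_or = np.log(p2 / (1 - p2)) - np.log(p1 / (1 - p1))
    x1 = rng.binomial(n1, p1, n_sim)
    x2 = rng.binomial(n2, p2, n_sim)
    # Haldane-Anscombe 0.5 correction keeps the estimator finite for zero cells.
    a, b = x2 + 0.5, n2 - x2 + 0.5
    c, d = x1 + 0.5, n1 - x1 + 0.5
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    covered = (log_or - z * se <= true_log_or) & (true_log_or <= log_or + z * se)
    return covered.mean()

print(wald_or_coverage())   # compare against the nominal 0.95
```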

Journal ArticleDOI
TL;DR: This paper proves that an arbitrary probability density function can accurately be represented by a mixture of lognormal random variables (RVs) and provides outage and cellular coverage probability expressions, where it is shown that more accurate shadow fading models yield more realistic performance estimates.
Abstract: Modeling the variations in the local mean received power, the shadow fading is a relatively understudied effect in the literature. The inaccuracy of the universally accepted lognormal model is shown in many works. However, proposing other statistical distributions, such as gamma, which are not stemmed from the natural underlying physical process, cannot provide sufficient insights. Conceding the physical process of multiple reflections generating the lognormal distribution, in this paper, we propose a generalized mixture model that can address the modeling inaccuracies observed with a single lognormal distribution that may not correctly represent empirical data sets. To show that lognormal mixture model can be used under any shadow fading condition, we prove that an arbitrary probability density function can accurately be represented by a mixture of lognormal random variables (RVs). One of the main issues associated with mixture models is the determination of the mixture components. Here, we propose two solutions. Our first solution is a Dirichlet-process-mixture-based estimation technique that can provide the optimum number of components. Our second solution is an expectation–maximization (EM) algorithm-based technique for a more practical implementation. The proposed model and solution approaches are applied to our empirical data set, where the accuracy of the mixture model is verified by using both confidence-based and error-vector-norm-based techniques. Concluding this paper, we provide outage and cellular coverage probability expressions, where we show that more accurate shadow fading models yield more realistic performance estimates.
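
Because a mixture of lognormal components in the linear domain is exactly a Gaussian mixture in the log (dB) domain, the EM-based fitting step described above can be sketched with an off-the-shelf Gaussian mixture fit to log-scale shadow-fading samples. The data below are synthetic, and the component count is fixed by hand rather than chosen by the paper's Dirichlet-process approach.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic shadow-fading samples (dB): two lognormal populations in linear scale
# correspond to two Gaussian components in dB.
samples_db = np.concatenate([rng.normal(0.0, 4.0, 3000),
                             rng.normal(6.0, 8.0, 1500)])

# EM fit of a K-component Gaussian mixture in the dB domain
# == a K-component lognormal mixture in the linear domain.
gmm = GaussianMixture(n_components=2, random_state=0).fit(samples_db.reshape(-1, 1))

print("weights:", gmm.weights_)
print("means (dB):", gmm.means_.ravel())
print("std devs (dB):", np.sqrt(gmm.covariances_).ravel())
```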

Journal ArticleDOI
TL;DR: The proposed pseudolikelihood method can properly estimate the covariance between pooled estimates for different outcomes, which enables valid inference on functions of the pooled estimates, and it can be applied to meta-analyses in which some studies have outcomes missing completely at random.
Abstract: Recently, multivariate random-effects meta-analysis models have received a great deal of attention, despite their greater complexity compared to univariate meta-analyses. One of their advantages is the ability to account for within-study and between-study correlations. However, the standard inference procedures, such as maximum likelihood or restricted maximum likelihood inference, require the within-study correlations, which are usually unavailable. In addition, the standard inference procedures suffer from the problem of a singular estimated covariance matrix. In this paper, we propose a pseudolikelihood method to overcome these problems. The pseudolikelihood method does not require within-study correlations and is not prone to the singular covariance matrix problem. In addition, it can properly estimate the covariance between pooled estimates for different outcomes, which enables valid inference on functions of pooled estimates, and it can be applied to meta-analyses in which some studies have outcomes missing completely at random. Simulation studies show that the pseudolikelihood method provides unbiased estimates for functions of pooled estimates, well-estimated standard errors, and confidence intervals with good coverage probability. Furthermore, the pseudolikelihood method is found to maintain high relative efficiency compared to that of the standard inferences with known within-study correlations. We illustrate the proposed method through three meta-analyses: for comparison of prostate cancer treatments, for the association between paraoxonase 1 activities and coronary heart disease, and for the association between homocysteine level and coronary heart disease.

Journal ArticleDOI
TL;DR: This paper focuses on using the jackknife, the adjusted jackknife, and the extended jackknife empirical likelihood methods to construct confidence intervals for the mean absolute deviation of a random variable; a simulation study compares the average length and coverage probability of the jackknife empirical likelihood methods with those of the normal approximation method.

Journal ArticleDOI
TL;DR: In this paper, a non-parametric variance estimator for the weighted kappa statistic is proposed that requires neither a within-cluster correlation structure nor distributional assumptions, and the results of an extensive Monte Carlo simulation study demonstrate that this estimator provides consistent estimation.

Journal ArticleDOI
TL;DR: In this paper, it is argued that by using the Bonferroni method, a band can often be obtained which is smaller than the Wald band, and the joint bootstrap distribution of the impulse response coefficient estimators is taken into account and mapped into the band.
Abstract: In impulse response analysis estimation uncertainty is typically displayed by constructing bands around estimated impulse response functions. If they are based on the joint asymptotic distribution possibly constructed with bootstrap methods in a frequentist framework, often individual confidence intervals are simply connected to obtain the bands. Such bands are known to be too narrow and have a joint coverage probability lower than the desired one. If instead the Wald statistic is used and the joint bootstrap distribution of the impulse response coefficient estimators is taken into account and mapped into the band, it is shown that such a band is typically rather conservative. It is argued that, by using the Bonferroni method, a band can often be obtained which is smaller than the Wald band.
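
Once bootstrap draws of the impulse responses are available, the Bonferroni band is formed by taking pointwise quantiles at the adjusted level α/H across the H horizons, so the joint coverage is at least 1 − α. A minimal sketch, assuming a (draws × horizons) array of bootstrap impulse responses has already been computed:

```python
import numpy as np

def bonferroni_band(irf_draws, alpha=0.10):
    """Joint (1 - alpha) Bonferroni band from bootstrap impulse-response draws.

    irf_draws: array of shape (n_draws, n_horizons), one impulse response per draw.
    Returns (lower, upper) arrays of length n_horizons.
    """
    n_draws, n_horizons = irf_draws.shape
    adj = alpha / n_horizons                       # Bonferroni adjustment over horizons
    lower = np.quantile(irf_draws, adj / 2, axis=0)
    upper = np.quantile(irf_draws, 1 - adj / 2, axis=0)
    return lower, upper

# Toy usage with fake bootstrap draws for a 12-horizon impulse response.
rng = np.random.default_rng(0)
fake_draws = rng.normal(loc=np.linspace(1.0, 0.0, 12), scale=0.2, size=(999, 12))
lo, up = bonferroni_band(fake_draws)
print(np.round(lo, 2), np.round(up, 2))
```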

Journal ArticleDOI
TL;DR: An extensive literature review on the estimation of the statistical cut point is conducted, and it is shown that the actual coverage probability for the lower confidence limit of a normal percentile using the approximate normal method is much larger than the required confidence level when only a small number of assays is conducted in practice.
Abstract: The cut point of the immunogenicity screening assay is the level of response of the immunogenicity screening assay at or above which a sample is defined to be positive and below which it is defined to be negative. The Food and Drug Administration Guidance for Industry on Assay Development for Immunogenicity Testing of Therapeutic recommends the cut point to be an upper 95 percentile of the negative control patients. In this article, we assume that the assay data are a random sample from a normal distribution. The sample normal percentile is a point estimate with a variability that decreases as the sample size increases. Therefore, the sample percentile does not assure at least a 5% false-positive rate (FPR) with a high confidence level (e.g., 90%) when the sample size is not sufficiently large. With this concern, we propose to use a lower confidence limit for a percentile as the cut point instead. We have conducted an extensive literature review on the estimation of the statistical cut point and compare several selected methods for the immunogenicity screening assay cut-point determination in terms of bias, coverage probability, and FPR. The selected methods evaluated for the immunogenicity screening assay cut-point determination are the sample normal percentile, the exact lower confidence limit of a normal percentile (Chakraborti and Li, 2007), and the approximate lower confidence limit of a normal percentile. It is shown that the actual coverage probability for the lower confidence limit of a normal percentile using the approximate normal method is much larger than the required confidence level with a small number of assays conducted in practice. We recommend using the exact lower confidence limit of a normal percentile for cut-point determination.
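
The exact lower confidence limit of a normal percentile can be computed from the noncentral t distribution: for an i.i.d. normal sample, (x̄ − θ_p)/(s/√n) follows a noncentral t with n − 1 degrees of freedom and noncentrality −z_p√n, where θ_p is the p-th percentile. The sketch below follows that standard construction; treat it as an illustration rather than a reproduction of the cited Chakraborti and Li (2007) formulas.

```python
import numpy as np
from scipy.stats import norm, nct

def exact_lower_limit_percentile(x, p=0.95, conf=0.90):
    """Exact 100*conf% lower confidence limit for the p-th percentile of a normal sample."""
    x = np.asarray(x, dtype=float)
    n, xbar, s = x.size, x.mean(), x.std(ddof=1)
    zp = norm.ppf(p)
    # (xbar - theta_p) / (s / sqrt(n)) ~ noncentral t(df = n - 1, nc = -zp * sqrt(n))
    c = nct.ppf(conf, df=n - 1, nc=-zp * np.sqrt(n))
    return xbar - c * s / np.sqrt(n)

rng = np.random.default_rng(0)
sample = rng.normal(100.0, 10.0, size=30)          # e.g., negative-control assay responses
cut_point = exact_lower_limit_percentile(sample)   # sits below the plug-in 95th percentile
print(cut_point, np.quantile(sample, 0.95))
```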

Journal ArticleDOI
TL;DR: An exponential-series approximation, potentially converging to the exact value and with reduced computational complexity, is proposed for the desired signal power statistics under Rician fading and an analytical approach for evaluating the coverage probability is presented.
Abstract: This letter analyzes the coverage probability performance of heterogeneous cellular network (HCN) in a Rician/Rayleigh fading environment. First, an exponential-series approximation, potentially converging to the exact value and with reduced computational complexity, is proposed for the desired signal power statistics under Rician fading. Then, an analytical approach for evaluating the coverage probability of HCNs under Rician fading for desired signal and Rayleigh fading for interfering signals is presented and verified through simulation. Numerical results demonstrate considerable improvement in coverage probability when the desired signal is Rician faded, compared to that obtained when it is Rayleigh faded.

Journal ArticleDOI
TL;DR: It is found that probabilistic membership information derived from the bootstrap analysis can be used to improve the cluster assignment of individual objects, albeit only in the case of a very large number of clusters.
Abstract: Because of its deterministic nature, K-means does not yield confidence information about centroids and estimated cluster memberships, although this could be useful for inferential purposes. In this paper we propose to arrive at such information by means of a non-parametric bootstrap procedure, the performance of which is tested in an extensive simulation study. Results show that the coverage of hyper-ellipsoid bootstrap confidence regions for the centroids is in general close to the nominal coverage probability. For the cluster memberships, we found that probabilistic membership information derived from the bootstrap analysis can be used to improve the cluster assignment of individual objects, albeit only in the case of a very large number of clusters. However, in the case of smaller numbers of clusters, the probabilistic membership information still appeared to be useful as it indicates for which objects the cluster assignment resulting from the analysis of the original data is likely to be correct; hence, this information can be used to construct a partial clustering in which the latter objects only are assigned to clusters.
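
The bootstrap procedure can be sketched as: resample objects with replacement, rerun K-means, align the bootstrap centroids to the original solution to handle label switching, and collect the aligned centroids to summarize their variability. The simple assignment-based alignment below is my own shortcut, not necessarily the alignment used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def bootstrap_kmeans(X, k=3, n_boot=200, seed=0):
    """Bootstrap distribution of K-means centroids, aligned to the original solution."""
    rng = np.random.default_rng(seed)
    base = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    boot_centroids = np.empty((n_boot, k, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, len(X), len(X))                  # resample objects
        km = KMeans(n_clusters=k, n_init=10, random_state=b).fit(X[idx])
        # Match bootstrap clusters to the original ones (handle label switching).
        row, col = linear_sum_assignment(cdist(base.cluster_centers_, km.cluster_centers_))
        boot_centroids[b] = km.cluster_centers_[col]
    return base.cluster_centers_, boot_centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0, 3, 6)])
centers, boot = bootstrap_kmeans(X)
print("centroid standard errors:\n", boot.std(axis=0))         # spread per centroid coordinate
```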

Journal ArticleDOI
TL;DR: This study extends a newly proposed PI estimation method, the Lower Upper Bound Estimation (LUBE) method, which adopts an Artificial Neural Network with two outputs to directly generate the upper and lower bounds of the PI without making any assumption about the data distribution.
Abstract: It is widely accepted that a Prediction Interval (PI) can provide more accurate and precise information than a deterministic forecast when the uncertainty level increases in flood forecasting. Coverage probability and PI width are the two main criteria used to assess a constructed PI, but rarely has there been an index to quantify the symmetry between the target value and the PI. This study extends a newly proposed PI estimation method called the Lower Upper Bound Estimation (LUBE) method, which adopts an Artificial Neural Network (ANN) with two outputs to directly generate the upper and lower bounds of the PI without making any assumption about the data distribution. A new Prediction Interval Symmetry (PIS) index is introduced, and a new objective function is developed for the comprehensive evaluation of PIs considering their coverage probability, width, and symmetry. Furthermore, the Shuffled Complex Evolution algorithm (SCE-UA) is used to minimize the objective function and optimize the ANN parameters in the LUBE method. The proposed method is applied to a real-world flood forecasting case study of the upper Yangtze River Watershed. The results show that the SCE-UA based LUBE method with the new objective function is very efficient; meanwhile, the midpoint forecast of the PI achieves excellent performance by evidently improving the symmetry of the PI.
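
The three ingredients of the new objective, coverage, width, and symmetry, can be combined into a single scalar suitable for a minimizer such as SCE-UA. The exact PIS index and objective function of the paper are not reproduced here; the sketch below is an assumed combination in the same spirit, with made-up penalty weights.

```python
import numpy as np

def lube_style_objective(y, lower, upper, nominal=0.90, eta=50.0, w_sym=1.0):
    """Composite PI objective: normalized width + coverage penalty + asymmetry penalty.

    Assumed form for illustration only; eta and w_sym are arbitrary weights.
    """
    y, lower, upper = map(np.asarray, (y, lower, upper))
    width = upper - lower
    picp = np.mean((y >= lower) & (y <= upper))
    norm_width = width.mean() / (y.max() - y.min())        # width relative to target range
    # Asymmetry: 0 when targets sit at interval midpoints, growing toward the bounds.
    asym = np.mean(np.abs(y - (lower + upper) / 2) / (width / 2))
    coverage_penalty = eta * max(0.0, nominal - picp)      # only penalize under-coverage
    return norm_width + coverage_penalty + w_sym * asym
```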

Journal ArticleDOI
TL;DR: In this article, the authors proposed an estimator that has a smaller bias and better coverage probability than the traditional estimator of Kendall's τ for zero-inflated continuous distributions.

Journal ArticleDOI
TL;DR: Simulation studies in terms of coverage probability and average length of confidence intervals demonstrate that the proposed JEL method performs well in small sample sizes.

Journal ArticleDOI
TL;DR: This work proposes a new method to estimate the group mean consistently, together with the corresponding variance estimation, and shows that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability.
Abstract: Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes.
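
The distinction drawn above, the response at the mean covariate versus the mean response for the studied population, is easy to demonstrate by standardization (averaging model predictions over the observed covariates with the group indicator set to each level). The sketch below uses a logistic regression on made-up data; it illustrates the discrepancy rather than the authors' specific estimator or its variance estimation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n)                       # two treatment groups (0/1)
x = rng.normal(0.0, 1.5, n)                         # a baseline covariate
p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * treat + 0.8 * x)))
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([treat, x]))    # intercept, group, covariate
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

for g in (0, 1):
    # Model-based mean "at the mean covariate": evaluate the response at x = mean(x).
    at_mean = fit.predict(sm.add_constant(np.array([[g, x.mean()]]),
                                          has_constant='add'))[0]
    # Population-averaged group mean: set everyone's group to g, average the predictions.
    Xg = sm.add_constant(np.column_stack([np.full(n, g), x]), has_constant='add')
    averaged = fit.predict(Xg).mean()
    print(f"group {g}: at mean covariate = {at_mean:.3f}, population averaged = {averaged:.3f}")
```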

Journal ArticleDOI
TL;DR: In this paper, the authors propose relative magnitude and relative standard deviation stopping rules in the context of Markov chain Monte Carlo (MCMC) simulations for estimating features of a target distribution, particularly for Bayesian inference.
Abstract: Markov chain Monte Carlo (MCMC) simulations are commonly employed for estimating features of a target distribution, particularly for Bayesian inference. A fundamental challenge is determining when these simulations should stop. We consider a sequential stopping rule that terminates the simulation when the width of a confidence interval is sufficiently small relative to the size of the target parameter. Specifically, we propose relative magnitude and relative standard deviation stopping rules in the context of MCMC. In each setting, we develop sufficient conditions for asymptotic validity, that is conditions to ensure the simulation will terminate with probability one and the resulting confidence intervals will have the proper coverage probability. Our results are applicable in a wide variety of MCMC estimation settings, such as expectation, quantile, or simultaneous multivariate estimation. Finally, we investigate the finite sample properties through a variety of examples and provide some recommendations to practitioners.
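
A relative-magnitude stopping rule of this kind can be sketched directly: after each batch of iterations, form a confidence interval for the posterior mean from an MCMC standard error (here, batch means), and stop once the half-width falls below a small fraction ε of the current estimate's magnitude. The toy sampler, target, and tuning values below are placeholders chosen only to make the example self-contained.

```python
import numpy as np
from scipy.stats import norm

def batch_means_se(chain):
    """MCMC standard error of the mean via non-overlapping batch means."""
    n = len(chain)
    b = max(1, int(np.floor(np.sqrt(n))))            # batch size ~ sqrt(n)
    a = n // b
    means = chain[:a * b].reshape(a, b).mean(axis=1)
    return np.sqrt(means.var(ddof=1) / a)

def run_until_relative_width(step=0.5, eps=0.02, conf=0.95,
                             check_every=5000, max_iter=2_000_000, seed=0):
    """Random-walk Metropolis for a N(3, 1) target; stop when the CI half-width
    for the posterior mean drops below eps * |estimate| (relative-magnitude rule)."""
    rng = np.random.default_rng(seed)
    z = norm.ppf(0.5 + conf / 2)
    log_target = lambda v: -0.5 * (v - 3.0) ** 2
    x, chain = 3.0, []
    while len(chain) < max_iter:
        for _ in range(check_every):
            prop = x + rng.normal(0, step)
            if np.log(rng.random()) < log_target(prop) - log_target(x):
                x = prop
            chain.append(x)
        est = np.mean(chain)
        half_width = z * batch_means_se(np.asarray(chain))
        if half_width < eps * abs(est):              # sequential stopping criterion
            break
    return est, half_width, len(chain)

print(run_until_relative_width())
```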

Journal ArticleDOI
TL;DR: It is analytically shown that for both FFR and SFR the optimal SIR threshold is equal to the target SIR $T$, i.e., $S_t = T$, and that FFR achieves a higher coverage than SFR at the optimal value of $S_t$.
Abstract: In this work, the authors derive the optimal signal-to-interference-ratio (SIR) thresholds $S_t$ which maximise the coverage probability for both fractional frequency reuse (FFR) and soft frequency reuse (SFR) networks, with base station locations modelled using a Poisson point process. It is analytically shown that for both FFR and SFR, the optimal SIR threshold is equal to the target SIR $T$, i.e., $S_t = T$. The authors also show that at the optimal SIR threshold, FFR achieves a higher coverage than frequency reuse $1/\Delta$. Furthermore, for the cases when $S_t > T$ and $S_t = T$, SFR achieves a higher coverage than reuse 1, and when $S_t < T$, the SFR coverage can be lower than the reuse-1 coverage. The FFR and SFR coverages are also compared for a given $\Delta$, and it is shown that FFR achieves a higher coverage than SFR at the optimal value of $S_t$.

Journal ArticleDOI
TL;DR: In the analysis of panel data that includes a time-varying covariate, a Hausman pretest is commonly used to decide whether subsequent inference is made using the random effects model or the fixed effects model as discussed by the authors.