
Showing papers on "Coverage probability published in 2017"


Journal ArticleDOI
TL;DR: In this paper, the authors derived the downlink coverage probability of a reference receiver located at an arbitrary position on the ground assuming Nakagami-$m$ fading for all wireless links.
Abstract: In this paper, we consider a finite network of unmanned aerial vehicles serving a given region. Modeling this network as a uniform binomial point process, we derive the downlink coverage probability of a reference receiver located at an arbitrary position on the ground assuming Nakagami-$m$ fading for all wireless links. The reference receiver is assumed to connect to its closest transmitting node as is usually the case in cellular systems. After deriving the distribution of distances from the reference receiver to the serving and interfering nodes, we derive an exact expression for downlink coverage probability in terms of the derivative of the Laplace transform of the interference power distribution. In the downlink of this system, it is not unusual to encounter scenarios in which the line-of-sight component is significantly stronger than the reflected multipath components. To emulate such scenarios, we also derive the coverage probability in the absence of fading from the results of Nakagami-$m$ fading by taking the limit $m \to \infty$. Using the asymptotic expansion of the incomplete gamma function, we concretely show that this limit reduces to a redundant condition. Consequently, we derive an accurate coverage probability approximation for this case using a dominant-interferer-based approach in which the effect of the dominant interferer is exactly captured and the residual interference from other interferers is carefully approximated. We then derive the bounds of the approximate coverage probability using the Berry-Esseen theorem. Our analyses reveal several useful trends in coverage probability as a function of the height of the transmitting nodes and the location of the reference receiver on the ground.
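The analytical results above can be sanity-checked numerically. Below is a minimal Monte Carlo sketch of the same setup, not the paper's derivation: UAVs placed as a uniform binomial point process in a disc at a fixed height, a ground receiver at the origin that associates with the closest UAV, Nakagami-$m$ (gamma) power fading, and an interference-limited SIR test. All parameter values (number of UAVs, disc radius, height, $m$, path loss exponent, SIR threshold) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_prob(n_uav=10, radius=1000.0, height=100.0, m=3.0,
                  alpha=4.0, sir_thresh_db=0.0, n_trials=20000):
    """Monte Carlo estimate of downlink coverage probability for a finite UAV
    network modeled as a uniform binomial point process; the ground receiver
    at the origin connects to the closest UAV (interference-limited)."""
    thresh = 10 ** (sir_thresh_db / 10)
    covered = 0
    for _ in range(n_trials):
        # Uniform UAV positions in a disc of the given radius, all at the same height.
        r = radius * np.sqrt(rng.uniform(size=n_uav))
        d = np.sqrt(r**2 + height**2)                       # 3-D distances to receiver
        g = rng.gamma(shape=m, scale=1.0 / m, size=n_uav)   # Nakagami-m power fading
        p_rx = g * d ** (-alpha)                            # received powers (unit Tx power)
        serving = np.argmin(d)                               # closest-UAV association
        interference = p_rx.sum() - p_rx[serving]
        if p_rx[serving] / interference > thresh:
            covered += 1
    return covered / n_trials

print(coverage_prob())
```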

348 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider confidence intervals for high-dimensional linear regression with random design and establish the convergence rates of the minimax expected length for confidence intervals in the oracle setting where the sparsity parameter is given.
Abstract: Confidence sets play a fundamental role in statistical inference. In this paper, we consider confidence intervals for high-dimensional linear regression with random design. We first establish the convergence rates of the minimax expected length for confidence intervals in the oracle setting where the sparsity parameter is given. The focus is then on the problem of adaptation to sparsity for the construction of confidence intervals. Ideally, an adaptive confidence interval should have its length automatically adjusted to the sparsity of the unknown regression vector, while maintaining a pre-specified coverage probability. It is shown that such a goal is in general not attainable, except when the sparsity parameter is restricted to a small region over which the confidence intervals have the optimal length of the usual parametric rate. It is further demonstrated that the lack of adaptivity is not due to the conservativeness of the minimax framework, but is fundamentally caused by the difficulty of learning the bias accurately.

155 citations


Journal ArticleDOI
TL;DR: A probability density forecasting method based on copula theory is proposed in order to capture the relationship between electrical load and real-time price, and the simulation results show that the proposed method has great potential for power load forecasting when an appropriate kernel function is selected for the KSVQR model.

127 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an analytical framework to analyze heterogeneous downlink millimeter-wave (mm-wave) cellular networks consisting of $K$ tiers of randomly located base stations (BSs), where each tier operates in an mm-wave frequency band.
Abstract: In this paper, we provide an analytical framework to analyze heterogeneous downlink millimeter-wave (mm-wave) cellular networks consisting of $K$ tiers of randomly located base stations (BSs), where each tier operates in an mm-wave frequency band. Signal-to-interference-plus-noise ratio (SINR) coverage probability is derived for the entire network using tools from stochastic geometry. The distinguishing features of mm-wave communications, such as directional beamforming and different path loss laws for line-of-sight and non-line-of-sight links, are incorporated into the coverage analysis by assuming averaged biased-received power association and Nakagami fading. By using the noise-limited assumption for mm-wave networks, a simpler expression requiring the computation of only one numerical integral for coverage probability is obtained. Also, the effect of beamforming alignment errors on the coverage probability analysis is investigated to gain insight into the performance in practical scenarios. Downlink rate coverage probability is derived as well to get more insights on the performance of the network. Moreover, the effect of deploying low-power smaller cells and the impact of the biasing factor on energy efficiency are analyzed. Finally, a hybrid cellular network operating in both mm-wave and $\mu$-wave frequency bands is addressed.

104 citations


Journal ArticleDOI
TL;DR: The results demonstrate that each method leads to unbiased treatment effect estimates, and based on precision of estimates, 95% coverage probability, and power, ANCOVA modeling of either change scores or post-treatment scores as the outcome proves to be the most effective.
Abstract: Often, repeated measures data are summarized into pre- and post-treatment measurements. Various methods exist in the literature for estimating and testing the treatment effect, including ANOVA, analysis of covariance (ANCOVA), and linear mixed modeling (LMM). Under the first two methods, the outcome can be modeled either as the post-treatment measurement (ANOVA-POST or ANCOVA-POST) or as a change score between pre and post measurements (ANOVA-CHANGE, ANCOVA-CHANGE). In LMM, the outcome is modeled as a vector of responses with or without Kenward-Roger adjustment. We consider five methods common in the literature and discuss them in terms of supporting simulations and theoretical derivations of variance. Consistent with existing literature, our results demonstrate that each method leads to unbiased treatment effect estimates, and based on precision of estimates, 95% coverage probability, and power, ANCOVA modeling of either change scores or post-treatment scores as the outcome proves to be the most effective. We further demonstrate each method with a real data example to exemplify comparisons in a real clinical context.
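For readers who want to reproduce the flavour of such a comparison, here is a small self-contained sketch of one of the five methods, ANCOVA with the post-treatment score as outcome and the baseline score as covariate, checked for bias by simulation. The data-generating values (effect size, pre-post correlation, sample size) are illustrative assumptions; the paper's full study also covers the other methods, coverage probability, and power.

```python
import numpy as np

rng = np.random.default_rng(5)

def ancova_post(pre, post, treat):
    """ANCOVA with the post-treatment score as outcome and the baseline (pre)
    score as covariate; returns the estimated treatment effect."""
    X = np.column_stack([np.ones_like(pre), treat, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    return beta[1]

def simulate(effect=2.0, rho=0.6, n=100, n_sim=2000):
    """Empirical bias of the ANCOVA-POST treatment-effect estimate (should be near zero)."""
    est = []
    for _ in range(n_sim):
        treat = rng.integers(0, 2, size=n).astype(float)
        pre = rng.normal(0.0, 1.0, size=n)
        # Post score correlated with the pre score, plus treatment effect and noise.
        post = rho * pre + effect * treat + rng.normal(0.0, np.sqrt(1 - rho**2), size=n)
        est.append(ancova_post(pre, post, treat))
    return np.mean(est) - effect

print(simulate())
```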

103 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the properties of a range of commonly used frequentist and Bayesian procedures in simulation studies and assess the consequences for interval estimation of the common treatment effect in random-effects meta-analysis.
Abstract: Meta-analyses in orphan diseases and small populations generally face particular problems, including small numbers of studies, small study sizes and heterogeneity of results. However, the heterogeneity is difficult to estimate if only very few studies are included. Motivated by a systematic review in immunosuppression following liver transplantation in children, we investigate the properties of a range of commonly used frequentist and Bayesian procedures in simulation studies. Furthermore, the consequences for interval estimation of the common treatment effect in random-effects meta-analysis are assessed. The Bayesian credibility intervals using weakly informative priors for the between-trial heterogeneity exhibited coverage probabilities in excess of the nominal level for a range of scenarios considered. However, they tended to be shorter than those obtained by the Knapp-Hartung method, which were also conservative. In contrast, methods based on normal quantiles exhibited coverages well below the nominal levels in many scenarios. With very few studies, the performance of the Bayesian credibility intervals is of course sensitive to the specification of the prior for the between-trial heterogeneity. In conclusion, the use of weakly informative priors as exemplified by half-normal priors (with a scale of 0.5 or 1.0) for log odds ratios is recommended for applications in rare diseases.
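As a point of reference for the frequentist procedures mentioned above, the following sketch computes the normal-quantile (Wald) interval and the Knapp-Hartung interval for the common effect in a random-effects meta-analysis, using the DerSimonian-Laird heterogeneity estimate. The toy effect estimates and variances are made up for illustration and are not from the liver transplantation review.

```python
import numpy as np
from scipy import stats

def random_effects_cis(y, v, level=0.95):
    """Normal-quantile (Wald) vs. Knapp-Hartung intervals for the common effect
    in random-effects meta-analysis, with DerSimonian-Laird heterogeneity."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v
    yw = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - yw) ** 2)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    ws = 1.0 / (v + tau2)                           # random-effects weights
    mu = np.sum(ws * y) / np.sum(ws)
    se_wald = np.sqrt(1.0 / np.sum(ws))
    z = stats.norm.ppf(0.5 + level / 2)
    wald = (mu - z * se_wald, mu + z * se_wald)
    # Knapp-Hartung: refined variance with a t quantile on k - 1 degrees of freedom.
    q = np.sum(ws * (y - mu) ** 2) / (k - 1)
    se_kh = np.sqrt(q / np.sum(ws))
    t = stats.t.ppf(0.5 + level / 2, df=k - 1)
    kh = (mu - t * se_kh, mu + t * se_kh)
    return mu, wald, kh

# Toy example: three small trials with log odds ratios and their variances.
print(random_effects_cis([-0.4, -0.1, -0.6], [0.09, 0.16, 0.25]))
```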

98 citations


Journal ArticleDOI
TL;DR: The results show that the proposed model significantly outperforms both reference models in terms of all evaluation metrics for all locations when the forecast horizon is greater than 5 min, and shows superior performance in predicting DNI ramps.

71 citations


Posted Content
TL;DR: This paper proposes a stretched exponential path loss model that is suitable for short-range communication, integrated into a downlink cellular network with base stations modeled by a Poisson point process, and derives expressions for the coverage probability, potential throughput, and area spectral efficiency.
Abstract: Distance-based attenuation is a critical aspect of wireless communications. As opposed to the ubiquitous power-law path loss model, this paper proposes a stretched exponential path loss model that is suitable for short-range communication. In this model, the signal power attenuates over a distance $r$ as $e^{-\alpha r^{\beta}}$, where $\alpha,\beta$ are tunable parameters. Using experimental propagation measurements, we show that the proposed model is accurate for short to moderate distances in the range $r \in (5,300)$ meters and so is a suitable model for dense and ultradense networks. We integrate this path loss model into a downlink cellular network with base stations modeled by a Poisson point process, and derive expressions for the coverage probability, potential throughput, and area spectral efficiency. Although the most general result for coverage probability has a double integral, several special cases are given where the coverage probability has a compact or even closed form. We then show that the potential throughput is maximized for a particular BS density and then collapses to zero for high densities, assuming a fixed SINR threshold. We next prove that the area spectral efficiency, which assumes an adaptive SINR threshold, is non-decreasing with the BS density and converges to a constant for high densities.
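A quick way to check the coverage expressions is simulation. The sketch below estimates downlink SINR coverage for a PPP of base stations under the stretched exponential path loss $e^{-\alpha r^{\beta}}$ with Rayleigh fading and nearest-BS association; the density, $\alpha$, $\beta$, noise power, and SINR threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage_prob(lam=1e-3, alpha=0.3, beta=0.5, radius=300.0,
                  noise=1e-9, sinr_db=0.0, n_trials=20000):
    """Monte Carlo SINR coverage for a downlink PPP of BSs with stretched
    exponential path loss exp(-alpha * r**beta) and Rayleigh fading; the
    typical user at the origin connects to the nearest BS."""
    thresh = 10 ** (sinr_db / 10)
    covered = 0
    for _ in range(n_trials):
        n = rng.poisson(lam * np.pi * radius**2)    # number of BSs in the disc
        if n == 0:
            continue
        r = radius * np.sqrt(rng.uniform(size=n))   # distances from the origin
        h = rng.exponential(size=n)                 # Rayleigh power fading
        p = h * np.exp(-alpha * r**beta)            # received powers
        serving = np.argmin(r)
        sinr = p[serving] / (p.sum() - p[serving] + noise)
        covered += sinr > thresh
    return covered / n_trials

print(coverage_prob())
```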

68 citations


Journal ArticleDOI
TL;DR: A prediction-interval-based model for quantifying the uncertainty of tidal current prediction is proposed, based on support vector regression (SVR) and the nonparametric lower upper bound estimation (LUBE) method.
Abstract: This paper proposes a prediction interval-based model for modeling the uncertainties of tidal current prediction. The proposed model constructs the optimal prediction intervals (PIs) based on support vector regression (SVR) and a nonparametric method called a lower upper bound estimation (LUBE) method. In order to increase the modeling stability of SVRs that are used in the LUBE method, the idea of combined prediction intervals is employed. As the optimization tool, a flower pollination algorithm along with a two-phase modification method is presented to optimize the SVR parameters. The proposed model employs fuzzy membership functions to provide appropriate balance between the PI coverage probability (PICP) and PI normalized average width (PINAW), independently. The performance of the proposed model is examined on the practical tidal current data collected from the Bay of Fundy, NS, Canada.
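The two interval-quality metrics used in the paper are easy to compute once the interval bounds are available. The helper functions below implement the usual definitions of PICP (fraction of targets covered) and PINAW (average width normalized by the target range); the toy tidal current values are made up for illustration.

```python
import numpy as np

def picp(y, lower, upper):
    """PI coverage probability: fraction of observations falling inside the interval."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return np.mean((y >= lower) & (y <= upper))

def pinaw(y, lower, upper):
    """PI normalized average width: mean interval width scaled by the target range."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return np.mean(upper - lower) / (y.max() - y.min())

# Toy example with made-up tidal current speeds and interval bounds.
y = np.array([1.2, 0.8, 1.5, 1.1, 0.9])
lo = y - 0.3
hi = y + 0.2
print(picp(y, lo, hi), pinaw(y, lo, hi))
```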

46 citations


Journal ArticleDOI
TL;DR: Heterogeneity estimators are identified that perform better than the suggested Paule-Mandel estimator, and maximum likelihood provides the best performance for both types of outcome in the absence of heterogeneity.
Abstract: When we synthesize research findings via meta-analysis, it is common to assume that the true underlying effect differs across studies. Total variability consists of the within-study and between-study variances (heterogeneity). There have been established measures, such as $I^2$, to quantify the proportion of the total variation attributed to heterogeneity. There is a plethora of estimation methods available for estimating heterogeneity. The widely used DerSimonian and Laird estimation method has been challenged, but knowledge of the overall performance of heterogeneity estimators is incomplete. We identified 20 heterogeneity estimators in the literature and evaluated their performance in terms of mean absolute estimation error, coverage probability, and length of the confidence interval for the summary effect via a simulation study. Although previous simulation studies have suggested the Paule-Mandel estimator, it has not been compared with all the available estimators. For dichotomous outcomes, estimating heterogeneity through Markov chain Monte Carlo is a good choice if an informative prior distribution for heterogeneity is employed (e.g., by published Cochrane reviews). Nonparametric bootstrap and positive DerSimonian and Laird perform well for all assessment criteria for both dichotomous and continuous outcomes. The Hartung-Makambi estimator can be the best choice when the heterogeneity values are close to 0.07 for dichotomous outcomes and for medium heterogeneity values (0.01, 0.05) for continuous outcomes. Hence, there are heterogeneity estimators (nonparametric bootstrap DerSimonian and Laird and positive DerSimonian and Laird) that perform better than the suggested Paule-Mandel. Maximum likelihood provides the best performance for both types of outcome in the absence of heterogeneity.
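Two of the estimators compared in the study, DerSimonian-Laird and Paule-Mandel, are short enough to sketch directly. The code below implements both for given study effects and within-study variances; the toy inputs are illustrative, and the other estimators evaluated in the paper are not shown.

```python
import numpy as np
from scipy.optimize import brentq

def tau2_dl(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    yw = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - yw) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)

def tau2_pm(y, v, upper=100.0):
    """Paule-Mandel estimator: choose tau^2 so the generalized Q statistic
    equals its expectation k - 1."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    def gen_q(t2):
        w = 1.0 / (v + t2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2) - (k - 1)
    if gen_q(0.0) <= 0:          # no excess dispersion: truncate at zero
        return 0.0
    return brentq(gen_q, 0.0, upper)

y = [0.10, -0.50, 0.45, 0.05]    # toy study effects (e.g., log odds ratios)
v = [0.04, 0.06, 0.05, 0.08]     # within-study variances
print(tau2_dl(y, v), tau2_pm(y, v))
```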

41 citations


Journal ArticleDOI
TL;DR: In this article, the authors construct confidence sets as credible balls with respect to the empirical Bayes posterior resulting from a certain two-level hierarchical prior; the quality of the posterior is characterized by the contraction rate, which is allowed to be local, that is, depending on the parameter.
Abstract: In the mildly ill-posed inverse signal-in-white-noise model, we construct confidence sets as credible balls with respect to the empirical Bayes posterior resulting from a certain two-level hierarchical prior. The quality of the posterior is characterized by the contraction rate which we allow to be local, that is, depending on the parameter. The issue of optimality of the constructed confidence sets is addressed via a trade-off between its “size” (the local radial rate) and its coverage probability. We introduce excessive bias restriction (EBR), more general than self-similarity and polished tail condition recently studied in the literature. Under EBR, we establish the confidence optimality of our credible set with some local (oracle) radial rate. We also derive the oracle estimation inequality and the oracle posterior contraction rate. The obtained local results are more powerful than global: adaptive minimax results for a number of smoothness scales follow as consequence, in particular, the ones considered by Szabo et al. [Ann. Statist. 43 (2015) 1391–1428].

Proceedings ArticleDOI
01 May 2017
TL;DR: It is shown that in multi-lane V2I networks, blockage among vehicles is not significant and deploying more BSs does not increase coverage probability efficiently in ultra-dense streets.
Abstract: Millimeter wave (mmWave) communication offers Gbps data transmission, which can support massive data sharing in vehicle-to-infrastructure (V2I) networks. In this paper, we analyze the blockage effects among different vehicles and the coverage probability of a typical receiver, considering cross-street BSs near urban intersections in a multi-lane mmWave vehicular network. First, a three-dimensional model of blockage among vehicles on different lanes is considered. Second, we compute the coverage probability considering the interference of cross-street base stations. Incorporating the blockage effects, we derive an exact, semi-closed-form expression for the cumulative distribution function (CDF) of the association link path gain. Then, a tight approximation of the coverage probability is computed. We provide numerical results to verify the accuracy of the analytic results. We demonstrate the effects of blockage and the cross-street interference. Also, we compare the coverage probability for different BS intensities under various street settings. It is shown that in multi-lane V2I networks, blockage among vehicles is not significant. Also, deploying more BSs does not increase coverage probability efficiently in ultra-dense streets.

Journal ArticleDOI
TL;DR: Confidence intervals for the single coefficient of variation and the difference of coefficients of variation in the two-parameter exponential distributions are examined using the method of variance of estimates recovery (MOVER), the generalized confidence interval (GCI), and the asymptotic confidence intervals (ACI).
Abstract: This article examines confidence intervals for the single coefficient of variation and the difference of coefficients of variation in the two-parameter exponential distributions, using the method of variance of estimates recovery (MOVER), the generalized confidence interval (GCI), and the asymptotic confidence interval (ACI). In simulation, the results indicate that coverage probabilities of the GCI maintain the nominal level in general. The MOVER performs well in terms of coverage probability when data only consist of positive values, but it has wider expected length. The coverage probabilities of the ACI satisfy the target for large sample sizes. We also illustrate our confidence intervals using a real-world example in the area of medical science.

Journal ArticleDOI
TL;DR: A general downlink model with zero-forcing precoding, applied in realistic heterogeneous cellular systems with multiple-antenna base stations (BSs), takes into consideration imperfect CSIT due to pilot contamination, channel aging due to users' relative movement, and unavoidable residual additive transceiver hardware impairments (RATHIs).
Abstract: Given the critical dependence of broadcast channels on the accuracy of channel state information at the transmitter (CSIT), we develop a general downlink model with zero-forcing precoding, applied in realistic heterogeneous cellular systems with multiple-antenna base stations (BSs). Specifically, we take into consideration imperfect CSIT due to pilot contamination, channel aging due to users' relative movement, and unavoidable residual additive transceiver hardware impairments (RATHIs). Assuming that the BSs are Poisson distributed, the main contributions focus on the derivations of the upper bound of the coverage probability and the achievable user rate for this general model. We show that both the coverage probability and the user rate are dependent on the imperfect CSIT and RATHIs. More concretely, we quantify the resultant performance loss of the network due to these effects. We depict that the uplink RATHIs have equal impact, but the downlink transmit BS distortion has a greater impact than the receive hardware impairment of the user. Thus, the transmit BS hardware should be of better quality than the user's receive hardware. Furthermore, we characterise both the coverage probability and user rate in terms of the time variation of the channel. It is shown that both of them decrease with increasing user mobility, but after a specific value of the normalized Doppler shift, they increase again. Actually, the time variation, following the Jakes autocorrelation function, mirrors this effect on coverage probability and user rate. Finally, we consider space-division multiple access (SDMA), single-user beamforming (SU-BF), and baseline single-input single-output transmission. A comparison among these schemes reveals that SU-BF outperforms SDMA in terms of coverage.

Journal ArticleDOI
TL;DR: In this paper, the authors develop methods for constructing some important statistical limits for a gamma distribution, such as upper prediction limits and tolerance limits, including upper prediction limits for at least p of m observations.
Abstract: This study develops methods for constructing some important statistical limits of a gamma distribution. First, we construct upper prediction limits and tolerance limits for a gamma distribution. In addition, upper prediction limits for at least p of m m..

Journal ArticleDOI
TL;DR: It is demonstrated that Monte Carlo sensitivity analysis can give inaccurate uncertainty assessments that do not reflect the data's influence on uncertainty about unmeasured confounding, and it is recommended that analysts use BSA for probabilistic sensitivity analysis.
Abstract: Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis.

Journal ArticleDOI
TL;DR: The results show that decreasing the activity factor and/or increasing the path-loss compensation factor reduces the CP variation around the spatially averaged value.
Abstract: This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the users' activity factor ($p$) and the path-loss compensation factor ($\epsilon$) on the uplink performance are analyzed. The results show that decreasing $p$ and/or increasing $\epsilon$ reduce the CP variation around the spatially averaged value.

Posted Content
TL;DR: In this article, the authors apply the Bayesian model averaging (BMA) method to constant stress accelerated degradation testing (ADT), and show that degradation model uncertainty has significant effects on the p-quantile lifetime at the use conditions, especially for extreme quantiles.
Abstract: In accelerated degradation testing (ADT), test data from higher-than-normal stress conditions are used to find stochastic models of degradation, e.g., Wiener process, Gamma process, and inverse Gaussian process models. In general, the selection of the degradation model is made with reference to one specific product and no consideration is given to model uncertainty. In this paper, we address this issue and apply the Bayesian model averaging (BMA) method to constant stress ADT. For illustration, stress relaxation ADT data are analyzed. We also conduct a simulation study to compare the s-credibility intervals for a single model and for BMA. The results show that degradation model uncertainty has significant effects on the p-quantile lifetime at the use conditions, especially for extreme quantiles. BMA can capture this uncertainty well and compute compromise s-credibility intervals with the highest coverage probability at each quantile.

Journal ArticleDOI
TL;DR: Approaches to address possible inefficiency in estimation resulting from survey weighting are reviewed, including methods derived from both the design-based and model-based perspectives.
Abstract: In sample surveys, the sample units are typically chosen using a complex design. This may lead to a selection effect and, if uncorrected in the analysis, may lead to biased inferences. To mitigate the effect on inferences of deviations from a simple random sample, a common technique is to use survey weights in the analysis. This article reviews approaches to address possible inefficiency in estimation resulting from such weighting. To improve inferences we emphasize modifications of the basic design-based weight, that is, the inverse of a unit's inclusion probability. These techniques include weight trimming, weight modelling and incorporating weights via models for survey variables. We start with an introduction to survey weighting, including methods derived from both the design-based and model-based perspectives. Then we present the rationale and a taxonomy of methods for modifying the weights. We next describe an extensive numerical study to compare these methods. Using as the criteria relative bias, relative mean square error, confidence or credible interval width and coverage probability, we compare the alternative methods and summarize our findings. To supplement this numerical study we use Texas school data to compare the distributions of the weights for several methods. We also make general recommendations, describe limitations of our numerical study and make suggestions for further investigation.
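One of the simplest weight modifications discussed in this literature is weight trimming. The sketch below caps weights at an upper quantile, rescales them to preserve the total, and reports the Kish effective sample size before and after trimming; the cap quantile, the lognormal weights, and the outcome values are illustrative assumptions, not the Texas school data.

```python
import numpy as np

def trim_weights(w, cap_quantile=0.95):
    """Simple weight trimming: cap weights at an upper quantile and
    rescale so the total weight is preserved."""
    w = np.asarray(w, float)
    cap = np.quantile(w, cap_quantile)
    wt = np.minimum(w, cap)
    return wt * (w.sum() / wt.sum())

def weighted_mean(y, w):
    return np.sum(np.asarray(w) * np.asarray(y)) / np.sum(w)

def kish_effective_n(w):
    """Kish effective sample size: quantifies the precision loss from unequal weights."""
    w = np.asarray(w, float)
    return w.sum() ** 2 / np.sum(w ** 2)

rng = np.random.default_rng(2)
w = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # skewed design weights (toy)
y = rng.normal(loc=50.0, scale=10.0, size=500)
print(weighted_mean(y, w), weighted_mean(y, trim_weights(w)))
print(kish_effective_n(w), kish_effective_n(trim_weights(w)))
```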

Journal ArticleDOI
TL;DR: Mancl and DeRouen’s covariance estimator with compound symmetry, first-order autoregressive, heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll’s estimator in the scenarios where the variance increased across visits.
Abstract: In longitudinal clinical trials, some subjects will drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with "unstructured" (UN) covariance structure are increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis using the UN covariance. However, using the MMRM analysis with the UN covariance could lead to convergence problems for numerical optimization, especially in trials with a small sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in settings with a small sample size. We investigated the performance of the sandwich covariance estimator and covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134) fitting simpler covariance structures through a simulation study. In terms of the type 1 error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure. The model-based covariance estimator with the UN structure without adjustment of the degrees of freedom, which is frequently used in applications, resulted in substantial inflation of the type 1 error rate. We recommend the use of Mancl and DeRouen's estimator in MMRM analysis if the number of subjects completing is (n + 5) or less, where n is the number of planned visits. Otherwise, the use of Kenward and Roger's method with the UN structure is the best approach.

Journal ArticleDOI
TL;DR: A mediation formula approach is proposed in which simple parametric models are utilized to approximate the baseline log-cumulative-hazard function; results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes.
Abstract: An important problem within the social, behavioural and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model. We give precise definitions of the total effect, natural indirect effect and natural direct effect in terms of the survival probability, hazard function and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log-cumulative-hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct a sensitivity analysis to assess the effect on inference about the mediation effects when the assumption of no unmeasured mediator–outcome confounding is violated.

Journal ArticleDOI
TL;DR: The Buehler method is applied to obtain exact confidence intervals for Cohen’s kappa coefficient in the case of two raters and binary items, based on four widely used asymptotic intervals: three Wald-type confidence intervals and one interval constructed from a profile variance.
Abstract: Cohen's kappa coefficient, κ, is a statistical measure of inter-rater agreement or inter-annotator agreement for qualitative items. In this paper, we focus on interval estimation of κ in the case of two raters and binary items. So far, only asymptotic and bootstrap intervals are available for κ due to its complexity. However, there is no guarantee that such intervals will capture κ with the desired nominal level 1 − α. In other words, the statistical inferences based on these intervals are not reliable. We apply the Buehler method to obtain exact confidence intervals based on four widely used asymptotic intervals: three Wald-type confidence intervals and one interval constructed from a profile variance. These exact intervals are compared with regard to coverage probability and length for small to medium sample sizes. The exact intervals based on the Garner interval and the Lee and Tu interval are generally recommended for use in practice due to good performance in both coverage probability and length.
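For orientation, the snippet below computes Cohen's kappa from a 2x2 agreement table for two raters and binary items, together with a percentile bootstrap interval as a simple baseline; it is not the Buehler exact construction or any of the four asymptotic intervals studied in the paper, and the counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def cohen_kappa(n11, n10, n01, n00):
    """Cohen's kappa from 2x2 agreement counts (n11: both positive,
    n10: rater A only, n01: rater B only, n00: both negative)."""
    n = n11 + n10 + n01 + n00
    po = (n11 + n00) / n                                                   # observed agreement
    pe = ((n11 + n10) * (n11 + n01) + (n01 + n00) * (n10 + n00)) / n**2    # chance agreement
    return np.nan if pe == 1 else (po - pe) / (1 - pe)

def bootstrap_ci(n11, n10, n01, n00, level=0.95, n_boot=5000):
    """Percentile bootstrap interval for kappa (a simple baseline,
    not the exact Buehler construction discussed in the paper)."""
    n = n11 + n10 + n01 + n00
    p = np.array([n11, n10, n01, n00]) / n
    reps = np.array([cohen_kappa(*rng.multinomial(n, p)) for _ in range(n_boot)])
    alpha = 1 - level
    return tuple(np.nanpercentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

print(cohen_kappa(40, 10, 8, 42))
print(bootstrap_ci(40, 10, 8, 42))
```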

Journal ArticleDOI
TL;DR: The findings in this paper shed light on several important aspects of dense MIMO HetNets: first, increasing the multiplexing gains yields lower coverage performance; second, densifying the network by installing an excessive number of low-power femto BSs allows the growth of the multiplexing gain of high-power, low-density macro-BSs without compromising the coverage performance.
Abstract: We study the coverage performance of multiantenna [multiple-input multiple-output (MIMO)] communications in heterogeneous networks (HetNets). Our main focus is on open-loop and multistream MIMO zero-forcing beamforming at the receiver. Network coverage is evaluated adopting tools from stochastic geometry. Besides fixed-rate transmission (FRT), we also consider adaptive-rate transmission (ART) while its coverage performance, despite its high relevance, has so far been overlooked. On the other hand, while the focus of the existing literature has solely been on the evaluation of coverage probability per stream, we target coverage probability per communication link—comprising multiple streams—which is shown to be a more conclusive performance metric in multistream MIMO systems. This, however, renders various analytical complexities rooted in statistical dependence among streams in each link. Using a rigorous analysis, we provide closed-form bounds on the coverage performance for FRT and ART. These bounds explicitly capture impacts of various system parameters including densities of BSs, SIR thresholds, and multiplexing gains. Our analytical results are further shown to cover popular closed-loop MIMO systems, such as eigen-beamforming and space-division multiple access. The accuracy of our analysis is confirmed by extensive simulations. The findings in this paper shed light on several important aspects of dense MIMO HetNets: first, increasing the multiplexing gains yields lower coverage performance; second, densifying network by installing an excessive number of low-power femto BSs allows the growth of the multiplexing gain of high-power, low-density macro-BSs without compromising the coverage performance; and third, for dense HetNets, the coverage probability does not increase with the increase of deployment densities.

Journal ArticleDOI
TL;DR: This study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
Abstract: The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier’s estimator, one semiparametric based on Cox’s model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox’s model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage, except for the nonparametric methods at the end of follow-up, especially for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.

Journal ArticleDOI
TL;DR: In this article, the exact coverage and expected length properties of the model averaged tail area (MATA) confidence interval were investigated in the context of two nested models, where the average tail area was assumed to be a Gaussian distribution.
Abstract: We investigate the exact coverage and expected length properties of the model averaged tail area (MATA) confidence interval proposed by Turek and Fletcher, CSDA, 2012, in the context of two nested,...

Journal ArticleDOI
TL;DR: This work proposes two closed-form approximate confidence intervals (CIs), one based on the method of variance estimate recovery (MOVER) and another based on the fiducial approach, which are very satisfactory in terms of coverage properties even for small samples, and better than other CIs for small to moderate samples.
Abstract: The problem of estimating the ratio of coefficients of variation of two independent lognormal populations is considered. We propose two closed-form approximate confidence intervals (CIs), one is based on the method of variance estimate recovery (MOVER), and another is based on the fiducial approach. The proposed CIs are compared with another CI available in the literature. Our new confidence intervals are very satisfactory in terms of coverage properties even for small samples, and better than other CIs for small to moderate samples. The methods are illustrated using an example.

Journal ArticleDOI
TL;DR: Results show that the proposed error correction approach does improve the prediction accuracy and that the proposed prediction interval estimation approach is reliable.

Journal ArticleDOI
TL;DR: Numerical analysis of a single-tier uplink cognitive radio network, modeled within a stochastic geometry framework, confirms that both the coverage probability and the spectral efficiency are higher when the cognitive receiver is outside the primary exclusion region.

Journal ArticleDOI
TL;DR: In this article, the authors derive an analytic expression for the $n^{-1}$ term, which may be used to calibrate the nominal coverage level to get $O(n^{-3/2}[\log(n)]^3)$ coverage error.

Journal ArticleDOI
28 Jul 2017
TL;DR: In this paper, the authors proposed confidence intervals for a single mean and the difference of two means of normal distributions with unknown coefficients of variation (CVs), which were compared with existing confidence intervals for the single normal mean based on the Student's t-distribution (small sample size case) and the z-distribution (large sample size case) using Monte Carlo simulation.
Abstract: This paper proposes confidence intervals for a single mean and the difference of two means of normal distributions with unknown coefficients of variation (CVs). The generalized confidence interval (GCI) approach and large sample (LS) approach were proposed to construct confidence intervals for the single normal mean with unknown CV. These confidence intervals were compared with the existing confidence intervals for the single normal mean based on the Student’s t-distribution (small sample size case) and the z-distribution (large sample size case). Furthermore, the confidence intervals for the difference between two normal means with unknown CVs were constructed based on the GCI approach, the method of variance estimates recovery (MOVER) approach and the LS approach and then compared with the Welch–Satterthwaite (WS) approach. The coverage probability and average length of the proposed confidence intervals were evaluated via Monte Carlo simulation. The results indicated that the GCIs for the single normal mean and the difference of two normal means with unknown CVs are better than the other confidence intervals. Finally, three datasets are given to illustrate the proposed confidence intervals.
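The evaluation criteria used in the paper, coverage probability and average length estimated by Monte Carlo simulation, can be illustrated with the benchmark Student-t interval for a single normal mean. The GCI, LS and MOVER intervals themselves are not implemented here, and the mean, coefficient of variation, sample size, and number of replications are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def t_interval(x, level=0.95):
    """Standard Student-t interval for a normal mean (the small-sample
    benchmark the proposed intervals are compared against)."""
    n = len(x)
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 + level / 2, df=n - 1)
    return m - t * se, m + t * se

def evaluate(mu=10.0, cv=0.3, n=15, n_sim=10000, level=0.95):
    """Monte Carlo estimate of coverage probability and average length."""
    sigma = cv * mu                      # unknown-CV setting: sigma tied to mu
    cover, length = 0, 0.0
    for _ in range(n_sim):
        x = rng.normal(mu, sigma, size=n)
        lo, hi = t_interval(x, level)
        cover += lo <= mu <= hi
        length += hi - lo
    return cover / n_sim, length / n_sim

print(evaluate())
```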