
Showing papers on "Coverage probability published in 2019"


Journal ArticleDOI
TL;DR: In this article, the authors developed an analytical framework to derive the meta distribution and moments of the conditional success probability (CSP), defined as the success probability for a given realization of the transmitters, in large-scale co-channel uplink and downlink non-orthogonal multiple access (NOMA) networks with one NOMA cluster per cell.
Abstract: We develop an analytical framework to derive the meta distribution and moments of the conditional success probability (CSP), which is defined as the success probability for a given realization of the transmitters, in large-scale co-channel uplink and downlink non-orthogonal multiple access (NOMA) networks with one NOMA cluster per cell. The moments of the CSP translate to various network performance metrics such as the standard success or signal-to-interference ratio (SIR) coverage probability (the first moment), the mean local delay (the −1st moment in a static network setting), and the meta distribution (the complementary cumulative distribution function of the success or SIR coverage probability, which can be approximated using the first and second moments). For the uplink NOMA network, to make the framework tractable, we propose two point process models for the spatial locations of the inter-cell interferers by utilizing the base station (BS)/user pair correlation function. We validate the proposed models by comparing the second moment measure of each model with that of the actual point process for the inter-cluster (or inter-cell) interferers obtained via simulations. For downlink NOMA, we derive closed-form solutions for the moments of the CSP, success (or coverage) probability, mean local delay, and meta distribution for the users. As an application of the developed analytical framework, we use the closed-form expressions to optimize the power allocations for downlink NOMA users in order to maximize the success probability of a given NOMA user with and without latency constraints. Closed-form optimal solutions for the transmit powers are obtained for the two-user NOMA scenario. We note that maximizing the success probability with latency constraints can significantly impact the optimal power solutions for low SIR thresholds and favor orthogonal multiple access.
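
The abstract notes that the meta distribution can be approximated from the first and second moments of the CSP. A minimal sketch of that moment-matching idea, fitting a beta distribution to two hypothetical moment values (the beta approximation is standard in the meta-distribution literature; the numbers below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy import stats

def meta_distribution_beta(m1, m2, x):
    """Approximate the meta distribution P(CSP > x) by matching a beta
    distribution to the first two moments (m1, m2) of the conditional
    success probability."""
    var = m2 - m1 ** 2
    alpha = m1 * (m1 * (1 - m1) / var - 1)   # method-of-moments parameters
    beta = alpha * (1 - m1) / m1
    return stats.beta.sf(x, alpha, beta)     # complementary CDF

# Hypothetical moments for some SIR threshold
print(meta_distribution_beta(m1=0.8, m2=0.68, x=0.5))
```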

78 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared the performance of different meta-analysis methods, including the DerSimonian-Laird approach, empirically and in a simulation study based on few studies and imbalanced study sizes, considering odds ratio (OR) and risk ratio (RR) effect sizes.
Abstract: Standard random-effects meta-analysis methods perform poorly when applied to few studies only. Such settings, however, are commonly encountered in practice. It is unclear whether, or to what extent, small-sample-size behaviour can be improved by more sophisticated modeling. We consider likelihood-based methods, the DerSimonian-Laird approach, Empirical Bayes, several adjustment methods and a fully Bayesian approach. Confidence intervals are based on a normal approximation, or on adjustments based on the Student t-distribution. In addition, a linear mixed model and two generalized linear mixed models (GLMMs) assuming binomial or Poisson distributed numbers of events per study arm are considered for pairwise binary meta-analyses. We extract an empirical data set of 40 meta-analyses from recent reviews published by the German Institute for Quality and Efficiency in Health Care (IQWiG). Methods are then compared empirically as well as in a simulation study based on few studies and imbalanced study sizes, considering odds ratio (OR) and risk ratio (RR) effect sizes. Coverage probabilities and interval widths for the combined effect estimate are evaluated to compare the different approaches. Empirically, a majority of the identified meta-analyses include only 2 studies. Variation of methods or effect measures affects the estimation results. In the simulation study, coverage probability is, in the presence of heterogeneity and few studies, mostly below the nominal level for all frequentist methods based on the normal approximation, in particular when study sizes in a meta-analysis are not balanced, but improves when confidence intervals are adjusted. Bayesian methods result in better coverage than the frequentist methods with normal approximation in all scenarios, except for some cases of very large heterogeneity where the coverage is slightly lower. Credible intervals are, empirically and in the simulation study, wider than unadjusted confidence intervals, but considerably narrower than adjusted ones, with some exceptions when considering RRs and small numbers of patients per trial arm. Confidence intervals based on the GLMMs are, in general, slightly narrower than those from other frequentist methods. Some methods turned out to be impractical due to frequent numerical problems. In the presence of between-study heterogeneity, especially with unbalanced study sizes, caution is needed in applying meta-analytical methods to few studies, as either coverage probabilities might be compromised or intervals may be inconclusively wide. Bayesian estimation with a sensibly chosen prior for between-trial heterogeneity may offer a promising compromise.
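
As a companion to the comparison above, here is a minimal sketch of DerSimonian-Laird pooling with a plain normal-approximation interval and a Knapp-Hartung (Student-t based) adjustment, two representative frequentist options discussed in the paper; the three-study effect estimates and variances are hypothetical:

```python
import numpy as np
from scipy import stats

def dl_meta(y, v, level=0.95):
    """DerSimonian-Laird random-effects pooling of effect estimates y with
    within-study variances v; returns the pooled estimate, the DL tau^2, a
    normal-approximation CI and a Knapp-Hartung (t-based) CI."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)           # DL heterogeneity estimate
    ws = 1.0 / (v + tau2)                        # random-effects weights
    mu = np.sum(ws * y) / np.sum(ws)
    z = stats.norm.ppf(0.5 + level / 2)
    se = np.sqrt(1.0 / np.sum(ws))
    ci_normal = (mu - z * se, mu + z * se)
    # Knapp-Hartung: modified variance and a t quantile with k-1 df
    se_kh = np.sqrt(np.sum(ws * (y - mu) ** 2) / ((k - 1) * np.sum(ws)))
    t = stats.t.ppf(0.5 + level / 2, df=k - 1)
    ci_kh = (mu - t * se_kh, mu + t * se_kh)
    return mu, tau2, ci_normal, ci_kh

# Hypothetical log odds ratios and variances from three small studies
print(dl_meta([0.2, 0.5, -0.1], [0.04, 0.09, 0.05]))
```

With few studies the adjusted (Knapp-Hartung) interval is typically wider than the normal-approximation one, which is exactly the coverage-versus-width trade-off the simulation study quantifies.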

68 citations


Journal ArticleDOI
TL;DR: The numerical results indicate that the coverage probability with the multi-directional path loss model is lower than that with the isotropic path loss model, and that the association probability for long link distances increases noticeably as the effect of anisotropic path loss becomes stronger in 5G fractal small cell networks.
Abstract: It is anticipated that a considerably higher network capacity will be achieved by fifth generation (5G) small cell networks incorporating millimeter wave (mm-wave) technology. However, mm-wave signals are more sensitive to blockages than signals in lower frequency bands, which highlights the effect of anisotropic path loss on network coverage. According to the fractal characteristics of cellular coverage, a multi-directional path loss model is proposed for 5G small cell networks, where different directions are subject to different path loss exponents. Furthermore, the coverage probability, association probability, and handoff probability are derived for 5G fractal small cell networks based on the proposed multi-directional path loss model. The numerical results indicate that the coverage probability with the multi-directional path loss model is lower than that with the isotropic path loss model, and that the association probability for long link distances, e.g., 150 m, increases noticeably as the effect of anisotropic path loss becomes stronger in 5G fractal small cell networks. Moreover, it is observed that the anisotropic propagation environment has a profound impact on handoff performance. We can therefore conclude that the resulting heavy handoff overhead is emerging as a new challenge for 5G fractal small cell networks.
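
A toy Monte Carlo illustrating the qualitative comparison reported above: SIR coverage of a typical user when all directions share a single path loss exponent versus when each angular sector around the user draws its own exponent. This is not the paper's analytical model; the BS density, disc radius, sector count and exponent range are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_coverage(isotropic, lam=1e-5, radius=2000.0, theta_db=0.0,
                 sectors=8, ple_range=(3.0, 4.5), n_trials=5000):
    """Toy SIR coverage with a typical user at the origin served by the nearest
    BS of a PPP; each angular sector gets its own path loss exponent unless
    `isotropic` is set (all parameter values are illustrative)."""
    theta = 10 ** (theta_db / 10)
    covered = 0
    for _ in range(n_trials):
        n = rng.poisson(lam * np.pi * radius ** 2)
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))              # BS distances
        phi = rng.uniform(0, 2 * np.pi, n)               # BS angles
        if isotropic:
            alpha = np.full(n, np.mean(ple_range))
        else:
            sector_ple = rng.uniform(*ple_range, sectors)
            alpha = sector_ple[(phi / (2 * np.pi) * sectors).astype(int)]
        h = rng.exponential(1.0, n)                       # Rayleigh fading powers
        p = h * r ** (-alpha)
        serving = np.argmin(r)                            # nearest-BS association
        interference = p.sum() - p[serving]
        covered += p[serving] > theta * interference
    return covered / n_trials

print(sir_coverage(isotropic=True), sir_coverage(isotropic=False))
```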

61 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduced a new transformed model, called the unit-Gompertz (UG) distribution, which exhibits a right-skewed (unimodal) or reversed-J shaped density, while its hazard rate can be constant, increasing, upside-down bathtub shaped, or bathtub shaped.
Abstract: Transformed families of distributions are sometimes very useful for exploring additional properties of phenomena that non-transformed (baseline) families of distributions cannot capture. In this paper, we introduce a new transformed model, called the unit-Gompertz (UG) distribution, which exhibits a right-skewed (unimodal) or reversed-J shaped density, while its hazard rate can be constant, increasing, upside-down bathtub shaped, or bathtub shaped. Some statistical properties of this new distribution are presented and discussed. Maximum likelihood estimates of the parameters that index the UG distribution are derived along with their corresponding asymptotic standard errors. Monte Carlo simulations are conducted to investigate the bias and root mean squared error of the maximum likelihood estimators as well as the coverage probability. Finally, the potential of the model is demonstrated and compared with three other distributions using two real data sets.
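
A small sketch of the kind of Monte Carlo coverage study described above, assuming the unit-Gompertz CDF F(x) = exp(-α(x^(−β) − 1)) on (0, 1), which is one common parameterization (verify against the paper before reusing). It samples by inversion, fits the parameters by maximum likelihood on the log scale, and records how often a Wald interval covers the true log(α); the BFGS inverse-Hessian standard errors are a rough stand-in for the exact asymptotic ones:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def rvs_ug(alpha, beta, size):
    """Inverse-CDF sampling, assuming F(x) = exp(-alpha * (x**(-beta) - 1))."""
    u = rng.random(size)
    return (1.0 - np.log(u) / alpha) ** (-1.0 / beta)

def neg_loglik(params, x):
    a, b = np.exp(params)                     # optimize on the log scale
    return -(len(x) * (np.log(a) + np.log(b))
             - (b + 1) * np.sum(np.log(x))
             - a * np.sum(x ** (-b) - 1.0))

def wald_covers(alpha, beta, n, level=0.95):
    """One replication: does the Wald CI for log(alpha) cover the true value?"""
    x = rvs_ug(alpha, beta, n)
    res = minimize(neg_loglik, x0=np.zeros(2), args=(x,), method="BFGS")
    se = np.sqrt(np.diag(res.hess_inv))       # rough asymptotic standard errors
    z = stats.norm.ppf(0.5 + level / 2)
    return abs(res.x[0] - np.log(alpha)) <= z * se[0]

cover = np.mean([wald_covers(alpha=2.0, beta=1.5, n=100) for _ in range(500)])
print("empirical coverage of the 95% Wald CI for log(alpha):", cover)
```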

56 citations


Journal ArticleDOI
TL;DR: A scalar tuning parameter is introduced that controls the posterior distribution spread, and a Monte Carlo algorithm is developed that sets this parameter so that the corresponding credible region achieves the nominal frequentist coverage probability.
Abstract: An advantage of methods that base inference on a posterior distribution is that credible regions are readily obtained. Except in well-specified situations, however, there is no guarantee that such regions will achieve the nominal frequentist coverage probability, even approximately. To overcome this difficulty, we propose a general strategy that introduces an additional scalar tuning parameter to control the posterior spread, and we develop an algorithm that chooses this parameter so that the corresponding credible region achieves the nominal coverage probability.
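
A minimal sketch of the calibration idea on a deliberately misspecified toy model: the assumed noise scale is too small, so the untuned credible interval undercovers, and scanning a scalar tuning parameter ω that scales the posterior spread recovers the nominal level. The grid search is a crude stand-in for the paper's Monte Carlo algorithm, and all numbers are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def coverage_for_omega(omega, theta_true=1.0, sigma_true=2.0, sigma_assumed=1.0,
                       n=20, level=0.95, n_rep=4000):
    """Empirical frequentist coverage of a credible interval whose half-width is
    scaled by omega. The toy posterior is N(xbar, sigma_assumed^2 / n) under a
    flat prior, but the data come from a larger true noise scale, so omega = 1
    undercovers."""
    z = stats.norm.ppf(0.5 + level / 2)
    hits = 0
    for _ in range(n_rep):
        x = rng.normal(theta_true, sigma_true, n)
        hits += abs(x.mean() - theta_true) <= z * omega * sigma_assumed / np.sqrt(n)
    return hits / n_rep

# Crude calibration: smallest omega on a grid whose empirical coverage reaches
# the nominal 95% level.
grid = np.linspace(1.0, 3.0, 9)
cov = np.array([coverage_for_omega(w) for w in grid])
print(dict(zip(grid.round(2), cov.round(3))),
      "calibrated omega:", grid[np.argmax(cov >= 0.95)])
```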

49 citations


Journal ArticleDOI
TL;DR: Fuzzy and neural network prediction interval models are developed based on fuzzy numbers by minimizing a novel criterion that includes the coverage probability and normalized average width; the results show that the proposed models are suitable alternatives for electrical consumption forecasting because they obtain the minimum interval widths that characterize the uncertainty of this type of stochastic process.
Abstract: Prediction interval modelling has been proposed in the literature to characterize uncertain phenomena and provide useful information from a decision-making point of view. In most of the reported studies, assumptions about the data distribution are made and/or the models are trained one step ahead, which can decrease the quality of the interval in terms of the information about the uncertainty modelled for a higher prediction horizon. In this paper, a new prediction interval modelling methodology based on fuzzy numbers is proposed to address the abovementioned drawbacks. Fuzzy and neural network prediction interval models are developed based on this proposed methodology by minimizing a novel criterion that includes the coverage probability and normalized average width. The fuzzy number concept is considered because the affine combination of fuzzy numbers generates, by definition, prediction intervals that can handle uncertainty without requiring assumptions about the data distribution. The developed models are compared with a covariance-based prediction interval method, and high-quality intervals are obtained, as determined by the narrower interval width of the proposed method. Additionally, the proposed prediction intervals are tested by forecasting, up to two days ahead, the load of the Huatacondo microgrid in the north of Chile and the consumption of residential dwellings in the town of Loughborough, UK. The results show that the proposed models are suitable alternatives for electrical consumption forecasting because they obtain the minimum interval widths that characterize the uncertainty of this type of stochastic process. Furthermore, the information provided by the obtained prediction intervals could be used to develop robust energy management systems that, for example, consider the worst-case scenario.
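
For reference, the two ingredients of the interval-quality criterion mentioned above can be computed as follows; the function names and the normalization by the target range are my own conventions and may differ from the paper's exact definitions:

```python
import numpy as np

def coverage_probability(y, lower, upper):
    """Fraction of observed targets that fall inside their prediction intervals."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return np.mean((y >= lower) & (y <= upper))

def normalized_average_width(y, lower, upper):
    """Mean interval width divided by the observed target range."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return np.mean(upper - lower) / (y.max() - y.min())

# Toy values (illustrative only)
y = np.array([10.0, 12.5, 11.0, 14.0])
lo = np.array([9.0, 11.0, 10.5, 12.0])
hi = np.array([11.5, 13.0, 12.5, 13.5])
print(coverage_probability(y, lo, hi), normalized_average_width(y, lo, hi))
```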

49 citations


Journal ArticleDOI
TL;DR: This paper develops an analytical framework for the evaluation of the coverage probability, or equivalently the complementary cumulative distribution function (CCDF) of the signal-to-interference-and-noise ratio (SINR), which was not possible using the existing PPP-based models.
Abstract: Owing to its flexibility in modeling real-world spatial configurations of users and base stations (BSs), the Poisson cluster process (PCP) has recently emerged as an appealing way to model and analyze heterogeneous cellular networks (HetNets). Despite its undisputed relevance to HetNets—corroborated by the models used in the industry—the PCP's use in performance analysis has been limited. This is primarily because of the lack of analytical tools to characterize the performance metrics, such as the coverage probability of a user connected to the strongest BS. In this paper, we develop an analytical framework for the evaluation of the coverage probability, or equivalently the complementary cumulative distribution function (CCDF) of the signal-to-interference-and-noise ratio (SINR), of a typical user in a K-tier HetNet under a max-power-based association strategy, where the BS locations of each tier follow either a Poisson point process (PPP) or a PCP. The key enabling step involves conditioning on the parent PPPs of all the PCPs, which allows us to express the coverage probability as a product of sum-product and probability generating functionals (PGFLs) of the parent PPPs. In addition to several useful insights, our analysis provides a rigorous way to study the impact of the cluster size on the SINR distribution, which was not possible using the existing PPP-based models.

47 citations


Journal ArticleDOI
TL;DR: A novel fuzzy interval prediction model (FIPM) based on the lower upper bound estimation method is proposed, and a novel interval type-2 (IT-2) fuzzy model is designed to construct the lower and upper bounds of the prediction interval (PI).
Abstract: Due to the intermittent and random nature of wind energy, wind power interval prediction (WPIP) is important for mitigating uncertainty and supporting the planning and scheduling of the power system. To improve the quality of WPIP, a novel fuzzy interval prediction model (FIPM) based on the lower upper bound estimation method is proposed in this paper. Within the FIPM framework, a novel interval type-2 (IT-2) fuzzy model is designed to construct the lower and upper bounds of the prediction interval (PI), in which an IT-2 fuzzy c-regression algorithm is used to partition the data space and identify the fuzzy model. The gravitational search algorithm is employed to optimize the FIPM by minimizing a coverage width-based criterion to reach a tradeoff between the interval width and coverage probability. In order to verify the effectiveness of the proposed method, existing interval prediction approaches are adopted in comparative experiments with 17 datasets extracted from five wind fields. The experimental results show that the proposed IT-2 FIPM achieves significantly better performance, with a substantial improvement in PI quality compared with traditional forecasting models.

43 citations


Journal ArticleDOI
TL;DR: The results show that the proposed probability density prediction model can effectively describe the uncertainty of wind and solar power and also provide technical support for the safe and stable operation of the power system.

40 citations


Journal ArticleDOI
TL;DR: In this paper, point predictions and prediction intervals (PIs) of ANN-based downscaling for the mean monthly precipitation and temperature of two stations (Tabriz and Ardabil in northwest Iran) were evaluated using general circulation models (GCMs).

35 citations


Journal ArticleDOI
TL;DR: In this article, a generalized linear regression analysis with compositional covariates is proposed, where a group of linear constraints on regression coefficients are imposed to account for the compositional nature of the data and to achieve subcompositional coherence.
Abstract: Motivated by regression analysis for microbiome compositional data, this article considers generalized linear regression analysis with compositional covariates, where a group of linear constraints on the regression coefficients is imposed to account for the compositional nature of the data and to achieve subcompositional coherence. A penalized likelihood estimation procedure using a generalized accelerated proximal gradient method is developed to efficiently estimate the regression coefficients. A de-biased procedure is developed to obtain asymptotically unbiased and normally distributed estimates, which leads to valid confidence intervals for the regression coefficients. Simulation results show that the coverage probability of the confidence intervals is correct and that the estimates have smaller variances when the appropriate linear constraints are imposed. The methods are illustrated by a microbiome study in order to identify bacterial species that are associated with inflammatory bowel disease (IBD) and to predict IBD using the fecal microbiome.

Journal ArticleDOI
TL;DR: It is concluded that the DeLong variance estimator is a reliable option regardless of the scenario, but confidence intervals should be constructed using the logit scale to avoid values above 1 or below 0 and the poor coverage probability that follows.
Abstract: The Mann-Whitney test is a commonly used non-parametric alternative to the two-sample t-test. Despite its frequent use, it is only rarely accompanied by confidence intervals for an effect size. If reported, the effect size is usually measured with the difference of medians or the shift of the two distribution locations. Neither of these two measures directly coincides with the test statistic of the Mann-Whitney test, so the interpretation of the test results and the confidence intervals may differ in important ways. In this paper, we focus on the probability that a random variable X is lower than a random variable Y. This measure is often referred to as the degree of overlap or the probabilistic index; it is in a one-to-one relationship with the Mann-Whitney test statistic. The measure equals the area under the ROC curve. Several methods have been proposed for the construction of the confidence interval for this measure, and we review the most promising ones and explain their ideas. We study the properties of different variance estimators and small-sample problems of confidence interval construction. We identify scenarios in which the existing approaches yield inadequate coverage probabilities. We conclude that the DeLong variance estimator is a reliable option regardless of the scenario, but confidence intervals should be constructed using the logit scale to avoid values above 1 or below 0 and the poor coverage probability that follows. A correction is needed for the case when all values from one sample are smaller than the values of the other. We propose a method that improves the coverage probability in these cases as well.
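
A minimal sketch of the recommended recipe: estimate P(X < Y) with ties counted as 1/2, compute the DeLong variance from the structural components, and build the confidence interval on the logit scale so it stays inside (0, 1). The degenerate case the paper corrects (all values of one sample below the other) is not handled here:

```python
import numpy as np
from scipy import stats

def probabilistic_index_ci(x, y, level=0.95):
    """P(X < Y) (probabilistic index / AUC), DeLong variance, logit-scale CI."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    # Pairwise kernel: 1 if x_i < y_j, 0.5 for ties, 0 otherwise
    psi = (x[:, None] < y[None, :]) + 0.5 * (x[:, None] == y[None, :])
    theta = psi.mean()
    v10 = psi.mean(axis=1)                    # structural component per x_i
    v01 = psi.mean(axis=0)                    # structural component per y_j
    var = v10.var(ddof=1) / n + v01.var(ddof=1) / m
    z = stats.norm.ppf(0.5 + level / 2)
    # Delta method on the logit scale
    logit = np.log(theta / (1 - theta))
    se_logit = np.sqrt(var) / (theta * (1 - theta))
    expit = lambda t: 1.0 / (1.0 + np.exp(-t))
    return theta, (expit(logit - z * se_logit), expit(logit + z * se_logit))

print(probabilistic_index_ci([1.2, 0.8, 1.5, 1.1], [1.4, 1.9, 1.3, 2.1, 1.7]))
```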

Journal ArticleDOI
TL;DR: In this article, the performance of a non-orthogonal multiple access (NOMA) system where users are ranked according to their distances instead of instantaneous channel gains is analyzed.
Abstract: We characterize the accuracy of analyzing the performance of a non-orthogonal multiple access (NOMA) system where users are ranked according to their distances instead of instantaneous channel gains, i.e., product of their distance-based path-loss and fading channel gains. Distance-based ranking of users is analytically tractable and can lead to important insights. However, it may not be appropriate in a multipath fading environment where a near user suffers from severe fading while a far user experiences weak fading. Since the ranking of users (and in turn interferers) in an NOMA system has a direct impact on coverage probability analysis, the impact of the traditional distance-based ranking, as opposed to instantaneous signal power-based ranking, needs to be understood. This will enable us to identify scenarios where distance-based ranking, which is easier to implement compared with instantaneous signal power-based ranking, is acceptable for the system performance analysis. To this end, in this paper, we derive the probability of the event when distance-based ranking yields the same results as instantaneous signal power-based ranking, which is referred to as the accuracy probability. We characterize the probability of accuracy considering Nakagami-m fading channels and three different spatial distribution models of user locations in NOMA, namely, the Poisson point process (PPP), the Matern cluster process (MCP), and the Thomas cluster process (TCP). For all these models of users' locations, we assume that the spatial locations of the base stations (BSs) follow a homogeneous PPP. We show that the accuracy probability decreases with the increasing number of users and increases with the path-loss exponent. In addition, through examples, we illustrate the impact of accuracy probability on uplink and downlink coverage probabilities. Closed-form expressions are presented for the Rayleigh fading environment. The effects of fading severity and users' pairing on the accuracy probability are also investigated.
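
A toy Monte Carlo estimate of the accuracy probability defined above: users are dropped uniformly in a disc, Nakagami-m fading powers are drawn as unit-mean gamma variables, and we check whether ordering users by distance matches ordering them by instantaneous received power. The disc radius, path loss exponent and fading parameter are illustrative assumptions rather than the paper's system model:

```python
import numpy as np

rng = np.random.default_rng(3)

def accuracy_probability(n_users, m=2.0, alpha=4.0, radius=100.0, n_trials=20000):
    """P(distance-based ranking == instantaneous-power-based ranking)."""
    hits = 0
    for _ in range(n_trials):
        r = radius * np.sqrt(rng.random(n_users))             # user distances
        g = rng.gamma(shape=m, scale=1.0 / m, size=n_users)   # Nakagami-m powers
        power = g * r ** (-alpha)
        hits += np.array_equal(np.argsort(r), np.argsort(-power))
    return hits / n_trials

for k in (2, 3, 4):
    print(k, "users:", accuracy_probability(k))
```

The estimate should decrease as the number of users grows, consistent with the trend reported above.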

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the LASSO-QR method can construct more accurate PIs and obtain more precise probability density forecasting results than quantile regression (QR).

Journal ArticleDOI
TL;DR: The PoSI intervals are generalized to post-model-selection predictors in linear regression, and their applications in inference and model selection are considered.
Abstract: We consider inference post-model-selection in linear regression. In this setting, Berk et al. (2013a) recently introduced a class of confidence sets, the so-called PoSI intervals, that cover a certain non-standard quantity of interest with a user-specified minimal coverage probability, irrespective of the model selection procedure that is being used. In this paper, we generalize the PoSI intervals to post-model-selection predictors.

Journal ArticleDOI
TL;DR: In this article, the statistical error of the variance-to-mean ratio, or the Y value in the Feynman-α method, from a single measurement of reactor noise is discussed.

Posted Content
TL;DR: Confidence intervals for the Sliced Wasserstein distance are constructed which have finite-sample validity under no assumptions or under mild moment assumptions and are adaptive in length to the regularity of the underlying distributions.
Abstract: Motivated by the growing popularity of variants of the Wasserstein distance in statistics and machine learning, we study statistical inference for the Sliced Wasserstein distance--an easily computable variant of the Wasserstein distance. Specifically, we construct confidence intervals for the Sliced Wasserstein distance which have finite-sample validity under no assumptions or under mild moment assumptions. These intervals are adaptive in length to the regularity of the underlying distributions. We also bound the minimax risk of estimating the Sliced Wasserstein distance, and as a consequence establish that the lengths of our proposed confidence intervals are minimax optimal over appropriate distribution classes. To motivate the choice of these classes, we also study minimax rates of estimating a distribution under the Sliced Wasserstein distance. These theoretical findings are complemented with a simulation study demonstrating the deficiencies of the classical bootstrap, and the advantages of our proposed methods. We also show strong correspondences between our theoretical predictions and the adaptivity of our confidence interval lengths in simulations. We conclude by demonstrating the use of our confidence intervals in the setting of simulator-based likelihood-free inference. In this setting, contrasting popular approximate Bayesian computation methods, we develop uncertainty quantification methods with rigorous frequentist coverage guarantees.
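
For orientation, a short sketch of the statistic itself (not of the paper's confidence intervals): a Monte Carlo estimate of the sliced Wasserstein-1 distance obtained by averaging one-dimensional Wasserstein distances over random projection directions:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(x, y, n_projections=200, seed=None):
    """Average the 1-D Wasserstein-1 distance of projected samples over random
    directions drawn uniformly from the unit sphere."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    dirs = rng.normal(size=(n_projections, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return float(np.mean([wasserstein_distance(x @ u, y @ u) for u in dirs]))

x = np.random.default_rng(0).normal(0.0, 1.0, size=(500, 3))
y = np.random.default_rng(1).normal(0.5, 1.0, size=(500, 3))
print(sliced_wasserstein(x, y, seed=2))
```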

Journal ArticleDOI
TL;DR: In this paper, the authors derived a numerically computable form of coverage probability for a cellular network model with BSs deployed according to a PPCP within the most fundamental setup, such as single-tier, Rayleigh fading, and nearest BS association.
Abstract: Poisson–Poisson cluster processes (PPCPs) are a class of point processes exhibiting attractive point patterns. Recently, PPCPs have been actively studied for modeling and analysis of heterogeneous cellular networks and device-to-device networks. However, to the best of the author’s knowledge, there is no exact derivation of downlink coverage probability in a numerically computable form for a cellular network model with base stations (BSs) deployed according to a PPCP within the most fundamental setup, such as single-tier, Rayleigh fading, and nearest BS association. In this letter, we consider such a fundamental model and derive a numerically computable form of coverage probability. To validate the analysis, we compare the results of numerical computations with those by Monte Carlo simulations and confirm good agreement.
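
A minimal sketch of the Monte Carlo side of such a validation, with BSs drawn from a Thomas cluster process (one instance of a PPCP), Rayleigh fading, and nearest-BS association. Noise is ignored (SIR rather than SINR), edge effects are not corrected, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def thomas_process(lam_parent, mean_offspring, sigma, half_width):
    """Thomas cluster process on [-half_width, half_width]^2: Poisson parents,
    Poisson numbers of Gaussian-scattered offspring around each parent."""
    area = (2 * half_width) ** 2
    parents = rng.uniform(-half_width, half_width,
                          size=(rng.poisson(lam_parent * area), 2))
    clusters = [p + rng.normal(0.0, sigma, size=(rng.poisson(mean_offspring), 2))
                for p in parents]
    return np.vstack(clusters) if clusters else np.empty((0, 2))

def coverage(theta_db=0.0, alpha=4.0, n_trials=5000):
    """SIR coverage of a typical user at the origin (toy parameters)."""
    theta = 10 ** (theta_db / 10)
    hits, valid = 0, 0
    for _ in range(n_trials):
        bs = thomas_process(lam_parent=5e-6, mean_offspring=5,
                            sigma=30.0, half_width=1500.0)
        if len(bs) < 2:
            continue
        valid += 1
        r = np.linalg.norm(bs, axis=1)
        p = rng.exponential(1.0, len(bs)) * r ** (-alpha)   # Rayleigh fading
        i = np.argmin(r)                                    # nearest-BS association
        hits += p[i] > theta * (p.sum() - p[i])
    return hits / valid

print(coverage())
```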

Journal ArticleDOI
Ruihan Hu, Qijun Huang, Sheng Chang, Hao Wang, Jin He
TL;DR: In this paper, a margin-based Pareto deep ensemble pruning (MBPEP) model is proposed, which achieves high-quality uncertainty estimation with a small prediction interval width (MPIW) and a high prediction interval coverage probability (PICP) by using deep ensemble networks.
Abstract: Machine learning algorithms have been effectively applied to various real-world tasks. However, it is difficult to provide high-quality machine learning solutions to accommodate an unknown distribution of input datasets; this difficulty is called the uncertainty prediction problem. In this paper, a margin-based Pareto deep ensemble pruning (MBPEP) model is proposed. It achieves high-quality uncertainty estimation with a small prediction interval width (MPIW) and a high prediction interval coverage probability (PICP) by using deep ensemble networks. In addition to these networks, unique loss functions are proposed, and these functions make the sub-learners available for standard gradient descent learning. Furthermore, the margin criterion fine-tuning-based Pareto pruning method is introduced to optimize the ensembles. Several experiments, including uncertainty prediction for classification and regression, are conducted to analyze the performance of MBPEP. The experimental results show that MBPEP achieves a small interval width and a low learning error with an optimal number of ensembles. For real-world problems, MBPEP performs well on input datasets with unknown distributions and improves learning performance on multi-task problems when compared with each single model.

Journal ArticleDOI
TL;DR: A thorough comparison between the two definitions of coverage shows that the definition introduced by Di Renzo et al. provides a tractable, closed-form approximation of the SINR coverage, which is proved to be an upper bound in relevant operating regimes.
Abstract: The coverage probability of cellular networks is usually defined as the probability that the signal-to-interference-plus-noise ratio (SINR) is greater than a reliability threshold. Based on this definition, the coverage probability cannot, in general, be formulated in a tractable closed-form expression. Di Renzo et al. have introduced a new definition of coverage that explicitly accounts for the cell association phase and that is proved to be analytically tractable for system-level optimization. In this letter, we conduct a thorough comparison between the two definitions of coverage. We show that the definition introduced by Di Renzo et al. provides a tractable, closed-form approximation of the SINR coverage, which is proved to be an upper bound in relevant operating regimes. We prove that the coverage probability monotonically: 1) increases with the density and the transmit power of base stations and 2) decreases with the density of mobile terminals and the transmission bandwidth.

Posted Content
TL;DR: In this paper, the authors presented the downlink coverage and rate analysis of a cellular vehicle-to-everything (C-V2X) communication network where the locations of vehicular nodes and road side units (RSUs) are modeled as Cox processes driven by a Poisson line process (PLP) and locations of cellular macro base stations (MBSs) were modeled as a 2D Poisson point process (PPP).
Abstract: In this paper, we present the downlink coverage and rate analysis of a cellular vehicle-to-everything (C-V2X) communication network where the locations of vehicular nodes and road side units (RSUs) are modeled as Cox processes driven by a Poisson line process (PLP) and the locations of cellular macro base stations (MBSs) are modeled as a 2D Poisson point process (PPP). Assuming a fixed selection bias and maximum average received power based association, we compute the probability with which a typical receiver, an arbitrarily chosen receiving node, connects to a vehicular node or an RSU and a cellular MBS. For this setup, we derive the signal-to-interference ratio (SIR)-based coverage probability of the typical receiver. One of the key challenges in the computation of coverage probability stems from the inclusion of shadowing effects. As the standard procedure of interpreting the shadowing effects as random displacement of the location of nodes is not directly applicable to the Cox process, we propose an approximation of the spatial model inspired by the asymptotic behavior of the Cox process. Using this asymptotic characterization, we derive the coverage probability in terms of the Laplace transform of interference power distribution. Further, we compute the downlink rate coverage of the typical receiver by characterizing the load on the serving vehicular nodes or RSUs and serving MBSs. We also provide several key design insights by studying the trends in the coverage probability and rate coverage as a function of network parameters. We observe that the improvement in rate coverage obtained by increasing the density of MBSs can be equivalently achieved by tuning the selection bias appropriately without the need to deploy additional MBSs.

Journal ArticleDOI
TL;DR: In this paper, the Engerer model is used as a decomposition model and then evaluated against in situ observations at three ground stations: Seoul, Buan, and Jeju.

Journal ArticleDOI
TL;DR: This letter provides an interference functional and Laplace transform based analysis using stochastic geometry to evaluate the expectation over the interference, which is further used to derive the coverage probability expressions for device-to-device (D2D) links.
Abstract: In this letter, we provide an interference functional and Laplace transform based analysis using stochastic geometry to evaluate the expectation over the interference, which is further used to derive the coverage probability expressions for device-to-device (D2D) links. We assume a more practically relevant Nakagami-m fading distribution to model fading between the D2D communication links considering interference from both D2D and cellular links. We also derive a bound on the coverage probability, which simplifies the coverage computations at higher values of the fading parameter. Furthermore, the numerical results corroborate the presented coverage analysis.

Journal ArticleDOI
22 Jul 2019-PeerJ
TL;DR: The results indicate that the Bayesian equitailed confidence interval based on the independent Jeffreys’ prior outperformed the other methods.
Abstract: Since rainfall data series often contain zero values and thus follow a delta-lognormal distribution, the coefficient of variation is often used to illustrate the dispersion of rainfall in a number of areas and so is an important tool in statistical inference for a rainfall data series. Therefore, the aim in this paper is to establish new confidence intervals for a single coefficient of variation for delta-lognormal distributions using Bayesian methods based on the independent Jeffreys', the Jeffreys' Rule, and the uniform priors compared with the fiducial generalized confidence interval. The Bayesian methods are constructed with either equitailed confidence intervals or the highest posterior density interval. The performance of the proposed confidence intervals was evaluated using coverage probabilities and expected lengths via Monte Carlo simulations. The results indicate that the Bayesian equitailed confidence interval based on the independent Jeffreys' prior outperformed the other methods. Rainfall data recorded in national parks in July 2015 and in precipitation stations in August 2018 in Nan province, Thailand are used to illustrate the efficacy of the proposed methods using a real-life dataset.
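
A sketch of the evaluation loop described above (coverage probability and expected length via Monte Carlo). The interval used here is a plain percentile bootstrap for the coefficient of variation, a simple stand-in rather than any of the paper's Bayesian or fiducial intervals; the true CV is obtained from the delta-lognormal moments:

```python
import numpy as np

rng = np.random.default_rng(5)

def true_cv(delta, mu, sigma):
    """CV of a delta-lognormal variable: zero with probability delta,
    lognormal(mu, sigma^2) otherwise."""
    m1 = (1 - delta) * np.exp(mu + sigma ** 2 / 2)
    m2 = (1 - delta) * np.exp(2 * mu + 2 * sigma ** 2)
    return np.sqrt(m2 - m1 ** 2) / m1

def coverage_and_length(delta=0.3, mu=1.0, sigma=0.5, n=50,
                        n_rep=500, n_boot=300, level=0.95):
    """Empirical coverage probability and expected length of a percentile
    bootstrap CI for the sample CV (a stand-in interval, to show the loop)."""
    target = true_cv(delta, mu, sigma)
    cv = lambda x: x.std(ddof=1) / x.mean()
    hits, lengths = 0, []
    for _ in range(n_rep):
        x = np.where(rng.random(n) >= delta, rng.lognormal(mu, sigma, n), 0.0)
        boots = [cv(rng.choice(x, n, replace=True)) for _ in range(n_boot)]
        lo, hi = np.quantile(boots, [(1 - level) / 2, (1 + level) / 2])
        hits += lo <= target <= hi
        lengths.append(hi - lo)
    return hits / n_rep, float(np.mean(lengths))

print(coverage_and_length())
```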

Posted Content
24 May 2019
TL;DR: The findings reveal that using directive beamforming for the aerial-BSs improves the downlink performance substantially since it alleviates the strong interference signals received from the aerial-BSs.
Abstract: In this paper, the downlink coverage probability and average achievable rate of an aerial user in a vertical HetNet (VHetNet) comprising aerial base stations (aerial-BSs) and terrestrial-BSs are analyzed. The locations of terrestrial-BSs are modeled as an infinite 2-D Poisson point process (PPP) while the locations of aerial-BSs are modeled as a finite 3-D Binomial point process (BPP) deployed at a particular height. We adopt a cellular-to-air (C2A) channel model that incorporates LoS and NLoS transmissions between the terrestrial-BSs and the typical aerial user, while we assume LoS transmissions for the air-to-air (A2A) channels separating the aerial user and aerial-BSs. For tractability reasons, we simplify the expression of the LoS probability provided by the International Telecommunication Union using curve fitting. We assume that the aerial user is associated with the BS (either an aerial-BS or terrestrial-BS) that provides the strongest average received power. Using tools from stochastic geometry, we derive analytical expressions of the coverage probability and achievable rate in terms of the Laplace transform of the interference power. To simplify the derived analytical expressions, we assume that the C2A links are in LoS conditions. Although this approximation gives pessimistic results compared to the exact performance, the analytical approximations are easier to evaluate and quantify the performance well at large heights of the aerial user. Our findings reveal that using directive beamforming for the aerial-BSs improves the downlink performance substantially since it alleviates the strong interference signals received from the aerial-BSs.

Journal ArticleDOI
TL;DR: A method to indicate and mitigate unrecognized biases: any pipeline with possibly unknown biases is run on both simulations and real data, and the coverage probability of the posteriors is computed, which measures whether posterior volume is a faithful representation of probability; if necessary, the posterior is then corrected in a non-parametric way that complies with objective Bayesian inference.
Abstract: When a posterior peaks in unexpected regions of parameter space, new physics has either been discovered, or a bias has not been identified yet. To tell these two cases apart is of paramount importance. We therefore present a method to indicate and mitigate unrecognized biases: Our method runs any pipeline with possibly unknown biases on both simulations and real data. It computes the coverage probability of posteriors, which measures whether posterior volume is a faithful representation of probability or not. If found to be necessary, the posterior is then corrected. This is a non-parametric debiasing procedure which complies with objective Bayesian inference. We use the method to debias inference with approximate covariance matrices and redshift uncertainties. We demonstrate why approximate covariance matrices bias physical constraints, and how this bias can be mitigated. We show that for a Euclid-like survey, if a traditional likelihood exists, then 25 end-to-end simulations suffice to guarantee that the figure of merit deteriorates maximally by 22 percent, or by 10 percent for 225 simulations. Thus, even a pessimistic analysis of Euclid-like data will still constitute a 25-fold increase in precision on the dark energy parameters in comparison to the state of the art (2018) set by KiDS and DES. We provide a public code of our method.

Journal ArticleDOI
TL;DR: This work presents a method to generate pseudo IPD from aggregate data using the group mean, standard deviation, and sample size within each study, i.e., the sufficient statistics, and demonstrates these methods in two empirical datasets in Alzheimer disease.
Abstract: The vast majority of meta-analyses use summary/aggregate data retrieved from published studies, in contrast to meta-analysis of individual participant data (IPD). When the outcome is continuous and IPD are available, linear mixed modelling methods can be employed in a one-stage approach. This allows for flexible modelling of within-study variability and between-study effects and accounts for the uncertainty in the estimates of between-study and within-study residual variances. However, IPD are seldom available. For the normal outcome case, we present a method to generate pseudo IPD from aggregate data using the group mean, standard deviation, and sample size within each study, i.e., the sufficient statistics. Analyzing the pseudo IPD with likelihood-based methods yields results identical to those from an analysis of the unknown true IPD. The advantage of this method is that we can employ the mixed modelling framework, implemented in many statistical software packages, and explore modelling options suitable for IPD, such as fixed study-specific intercepts with a fixed treatment effect, fixed study-specific intercepts with random treatment effects, and both random study and treatment effects, together with different options to model the within-study residual variance. This allows choosing the most realistic (or potentially complex) residual variance structures across studies, instead of using an overly simple structure. We demonstrate these methods in two empirical datasets in Alzheimer disease, where an extensive model, assuming all within-study variances to be free, fitted considerably better. In simulations, the pseudo IPD approach showed adequate coverage probability, because it accounted for small-sample effects.
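
A minimal sketch of the pseudo-IPD construction for a normal outcome: draw arbitrary values, standardize them to have exactly zero sample mean and unit sample standard deviation, then rescale to the reported group mean and SD. Likelihood-based (mixed-model) analyses of such data reproduce the true-IPD results because the normal likelihood depends on the data only through these sufficient statistics. The aggregate rows below are hypothetical:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)

def pseudo_ipd(study, arm, mean, sd, n):
    """Pseudo individual participant data that reproduce the reported group
    mean, SD and sample size exactly."""
    z = rng.normal(size=n)
    z = (z - z.mean()) / z.std(ddof=1)        # exact sample mean 0, sample SD 1
    return pd.DataFrame({"study": study, "arm": arm, "y": mean + sd * z})

# Hypothetical aggregate rows: (study, arm, mean, sd, n)
aggregate = [("s1", "treat", 23.1, 4.2, 40), ("s1", "ctrl", 25.4, 4.0, 42),
             ("s2", "treat", 22.0, 5.1, 35), ("s2", "ctrl", 24.8, 4.7, 33)]
ipd = pd.concat(pseudo_ipd(*row) for row in aggregate)

# The recovered summaries match the aggregate inputs exactly
print(ipd.groupby(["study", "arm"])["y"].agg(["mean", "std", "size"]))
```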

Journal ArticleDOI
TL;DR: This work establishes the asymptotic properties of the resulting bootstrap variance estimators for population totals and population quantiles using two pseudo-population bootstrap schemes and suggests that they perform well in terms of relative bias and coverage probability.
Abstract: The most common way to treat item nonresponse in surveys is to replace a missing value by a plausible value constructed on the basis of fully observed variables. Treating the imputed values as if they were observed may lead to invalid inferences. Bootstrap variance estimators for various finite population parameters are obtained using two pseudo-population bootstrap schemes. We establish the asymptotic properties of the resulting bootstrap variance estimators for population totals and population quantiles. A simulation study suggests that the methods perform well in terms of relative bias and coverage probability.

Journal ArticleDOI
TL;DR: In this paper, the reliability of a multicomponent stress-strength system is obtained by maximum likelihood (MLE) and Bayesian methods, and the results are compared using the MCMC technique for both small and large samples.
Abstract: Purpose: The purpose of this paper is to deal with the Bayesian and non-Bayesian estimation methods of multicomponent stress-strength reliability by assuming the Chen distribution. Design/methodology/approach: The reliability of a multicomponent stress-strength system is obtained by maximum likelihood (MLE) and Bayesian methods, and the results are compared using the MCMC technique for both small and large samples. Findings: The simulation study shows that Bayes estimates based on a gamma prior with an absence of prior information perform slightly better than the MLE with regard to both biases and mean squared errors. The Bayes credible intervals for reliability are also shorter, with competitive coverage percentages, compared with the confidence intervals. Further, the coverage probability is quite close to the nominal value for all sets of parameters when both sample sizes n and m increase. Originality/value: The lifetime distributions commonly used in reliability analysis, such as the exponential, gamma, lognormal and Weibull, only exhibit monotonically increasing, decreasing or constant hazard rates. However, in many applications in reliability and survival analysis, the most realistic hazard rate is bathtub-shaped, which is found in the Chen distribution. Therefore, the authors have studied the multicomponent stress-strength reliability under the Chen distribution by comparing the MLE and Bayes estimators.
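
A toy Monte Carlo estimate of the multicomponent stress-strength reliability R_{s,k} (the probability that at least s of k component strengths exceed a common random stress) under the Chen distribution, assuming the bathtub-hazard parameterization F(x) = 1 - exp(λ(1 - e^(x^β))); check the paper's parameterization before reuse, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def rvs_chen(beta, lam, size):
    """Inverse-CDF sampling, assuming F(x) = 1 - exp(lam * (1 - exp(x**beta)))."""
    u = rng.random(size)
    return np.log(1.0 - np.log1p(-u) / lam) ** (1.0 / beta)

def multicomponent_reliability(s, k, beta, lam_strength, lam_stress,
                               n_trials=100000):
    """Monte Carlo estimate of R_{s,k} with Chen-distributed strengths and
    stress sharing a common shape parameter beta."""
    stress = rvs_chen(beta, lam_stress, n_trials)
    strengths = rvs_chen(beta, lam_strength, (n_trials, k))
    return np.mean((strengths > stress[:, None]).sum(axis=1) >= s)

print(multicomponent_reliability(s=2, k=4, beta=0.8,
                                 lam_strength=0.5, lam_stress=1.5))
```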