
Showing papers on "Coverage probability published in 2023"



Journal ArticleDOI
TL;DR: In this paper, a meta distribution-based analytical framework for UAV-assisted cellular networks is provided, in which the probabilistic line-of-sight channel and realistic antenna pattern are taken into account for air-to-ground transmissions.
Abstract: Mounting compact and lightweight base stations on unmanned aerial vehicles (UAVs) is a cost-effective and flexible solution to provide seamless coverage over existing terrestrial networks. While the coverage probability in UAV-assisted cellular networks has been widely investigated, it provides only the first-order statistic of the signal-to-interference-plus-noise ratio (SINR). In this paper, to analyze high-order statistics of the SINR and characterize the disparity among individual links, we provide a meta distribution (MD)-based analytical framework for UAV-assisted cellular networks, in which the probabilistic line-of-sight channel and realistic antenna pattern are taken into account for air-to-ground transmissions. To accurately characterize the interference from UAVs, we relax the widely applied uniform off-boresight angle (OBA) assumption and derive the exact distribution of the OBA. Using stochastic geometry, for both steerable and vertical antenna scenarios, we obtain mathematical expressions for the moments of the conditional success probability, the SINR MD, and the mean local delay. Moreover, we study the asymptotic behavior of the moments as the network density approaches infinity. Numerical results validate the tightness of the theoretical results and show that the uniform OBA assumption underestimates the network performance, especially in the regime of moderate UAV altitude. We also show that when UAVs are equipped with steerable antennas, the network coverage and user fairness can be optimized simultaneously by carefully adjusting the UAV parameters.
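
A minimal Monte Carlo sketch of the moment-based meta distribution idea, assuming a simplified 2-D interference-limited network with Rayleigh fading and a fixed serving distance rather than the paper's UAV geometry, LoS channel, and antenna model; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, alpha, theta, r0 = 1e-3, 4.0, 1.0, 10.0  # interferer density, path loss, SINR threshold, serving distance
R, n_real = 500.0, 2000                       # simulation window radius, network realizations

Ps = np.empty(n_real)
for k in range(n_real):
    n = rng.poisson(lam * np.pi * R**2)
    r = R * np.sqrt(rng.uniform(size=n))      # distances of PPP interferers from the origin
    r = r[r > r0]                             # keep interferers farther than the serving link
    # Conditional success probability given the point process (Rayleigh fading,
    # interference-limited): product over interferers of 1 / (1 + theta * (r0/r_i)^alpha)
    Ps[k] = np.prod(1.0 / (1.0 + theta * (r0 / r) ** alpha))

M1, M2 = Ps.mean(), (Ps**2).mean()            # first two moments of the conditional success probability
print(f"coverage probability M1 = {M1:.3f}, second moment M2 = {M2:.3f}")
print(f"meta distribution at 0.9 (fraction of links with reliability > 0.9): {np.mean(Ps > 0.9):.3f}")
```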

3 citations


Journal ArticleDOI
01 Jan 2023
TL;DR: In this paper, an analytical approach to the coverage probability analysis of UAV-assisted cellular networks with imperfect beam alignment is proposed, where all users are distributed according to a Poisson cluster process (PCP) around base stations, in particular a Thomas cluster process (TCP).
Abstract: With the rapid development of emerging 5G and beyond (B5G) networks, Unmanned Aerial Vehicles (UAVs) are increasingly important for improving the performance of dense cellular networks. As a conventional metric, coverage probability has been widely studied in communication systems due to the increasing density of users and the complexity of heterogeneous environments. In recent years, stochastic geometry has attracted more attention as a mathematical tool for modeling mobile network systems. In this paper, an analytical approach to the coverage probability analysis of UAV-assisted cellular networks with imperfect beam alignment is proposed. All users are assumed to be distributed according to a Poisson Cluster Process (PCP) around base stations, in particular a Thomas Cluster Process (TCP). Using this model, the impact of beam alignment errors on the coverage probability is investigated. First, the Probability Density Function (PDF) of the directional antenna gain between the user and its serving base station is obtained. Then, the association probability with each tier is derived. A tractable expression is derived for the coverage probability for both Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) links. Numerical results demonstrate that at low UAV altitudes, beam alignment errors significantly degrade coverage performance. Moreover, for a small cluster size, alignment errors do not necessarily affect the coverage performance.
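
As a small illustration of the user model, here is a sketch of sampling a Thomas cluster process: base stations form a parent PPP and each user cluster is Gaussian-scattered around its base station (all parameter values are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
side = 2000.0                                 # square observation window side (m)
lam_p = 5e-6                                  # parent (base station) density per m^2
parents = rng.uniform(0, side, size=(rng.poisson(lam_p * side**2), 2))

mean_users, sigma = 5, 40.0                   # mean cluster size, Gaussian scatter std (m)
clusters = [p + sigma * rng.standard_normal((rng.poisson(mean_users), 2)) for p in parents]
users = np.vstack(clusters) if clusters else np.empty((0, 2))

print(len(parents), "base stations,", len(users), "clustered users")
```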

2 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used Bayesian methods to summarize the appropriate quantiles (e.g., 2.5th and 97.5th) of the marginal distribution of individuals across studies and construct a credible interval describing the estimation uncertainty in the lower and upper limits of the reference interval.
Abstract: Reference intervals, or reference ranges, aid medical decision‐making by containing a pre‐specified proportion (e.g., 95%) of the measurements in a representative healthy population. We recently proposed three approaches for estimating a reference interval from a meta‐analysis based on a random effects model: a frequentist approach, a Bayesian posterior predictive interval, and an empirical approach. Because the Bayesian posterior predictive interval becomes wider to incorporate estimation uncertainty, it may systematically contain more than 95% of measurements when the number of studies is small or the between‐study heterogeneity is large. The frequentist and empirical approaches also captured a median of less than 95% of measurements in this setting, and 95% confidence or credible intervals for the reference interval limits were not developed. In this update, we describe how one can instead use Bayesian methods to summarize the appropriate quantiles (e.g., 2.5th and 97.5th) of the marginal distribution of individuals across studies and construct a credible interval describing the estimation uncertainty in the lower and upper limits of the reference interval. We demonstrate through simulations that this method performs well in capturing 95% of values from the marginal distribution and maintains median coverage near 95% of the marginal distribution even when the number of studies is small or the between‐study heterogeneity is large. We also compare the results of this method to those obtained from the three previously proposed methods in the original case study of the meta‐analysis of frontal subjective postural vertical measurements.
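
A sketch of the summarization step described here, assuming a normal random-effects model and using placeholder posterior draws in place of a real MCMC fit: the 2.5th/97.5th quantiles of the marginal distribution N(mu, tau^2 + sigma^2) are computed per draw and then summarized into a point estimate and credible intervals for the limits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder posterior draws for (mu, tau, sigma); in practice these come from
# an MCMC fit of the random-effects meta-analysis model.
mu    = rng.normal(100.0, 1.0, 4000)
tau   = np.abs(rng.normal(3.0, 0.5, 4000))    # between-study SD
sigma = np.abs(rng.normal(5.0, 0.3, 4000))    # within-study SD

# Per-draw quantiles of the marginal distribution of individuals, N(mu, tau^2 + sigma^2)
sd_marg = np.sqrt(tau**2 + sigma**2)
lower = stats.norm.ppf(0.025, mu, sd_marg)
upper = stats.norm.ppf(0.975, mu, sd_marg)

print("reference interval:", np.median(lower), np.median(upper))
print("95% CrI for the lower limit:", np.percentile(lower, [2.5, 97.5]))
print("95% CrI for the upper limit:", np.percentile(upper, [2.5, 97.5]))
```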

1 citation


Journal ArticleDOI
01 Jun 2023-Symmetry
TL;DR: In this paper, a Markov chain Monte Carlo approach using Gibbs sampling was designed to derive the Bayesian estimate of δ. Judged by mean square error, bias, confidence interval length, and coverage probability, the numerical performance of the maximum likelihood and Bayesian estimates in Monte Carlo simulations was quite satisfactory.
Abstract: Based on independent progressive type-II censored samples from two-parameter Burr-type XII distributions, various point and interval estimators of δ = P(Y < X) are considered.
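
A quick Monte Carlo check of the stress-strength parameter, assuming hypothetical Burr XII shapes with a common inner shape c, in which case delta = P(Y < X) has a simple closed form to compare against:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
c, kx, ky = 2.0, 3.0, 1.5   # common inner shape c; outer shapes for strength X and stress Y
X = stats.burr12(c, kx).rvs(200_000, random_state=rng)   # strength
Y = stats.burr12(c, ky).rvs(200_000, random_state=rng)   # stress
delta_mc = np.mean(Y < X)
# With a common inner shape, delta = P(Y < X) = ky / (kx + ky)
print(f"Monte Carlo: {delta_mc:.4f}, closed form: {ky / (kx + ky):.4f}")
```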

1 citation


Journal ArticleDOI
TL;DR: In this article, a unified framework of the test-and-pool approach to general parameter estimation by combining gold-standard probability and non-probability samples was developed for finite-population inference.
Abstract: Multiple heterogeneous data sources are becoming increasingly available for statistical analyses in the era of big data. As an important example in finite-population inference, we develop a unified framework of the test-and-pool approach to general parameter estimation by combining gold-standard probability and non-probability samples. We focus on the case when the study variable is observed in both datasets for estimating the target parameters, and each contains other auxiliary variables. Utilizing the probability design, we conduct a pretest procedure to determine the comparability of the non-probability data with the probability data and decide whether or not to leverage the non-probability data in a pooled analysis. When the probability and non-probability data are comparable, our approach combines both data for efficient estimation. Otherwise, we retain only the probability data for estimation. We also characterize the asymptotic distribution of the proposed test-and-pool estimator under a local alternative and provide a data-adaptive procedure to select the critical tuning parameters that target the smallest mean square error of the test-and-pool estimator. Lastly, to deal with the non-regularity of the test-and-pool estimator, we construct a robust confidence interval that has a good finite-sample coverage property.
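
A toy sketch of the pretest-then-pool logic for a population mean; the actual method uses design-based estimators, a local-alternative analysis, and tuned critical values, so this Welch-test version with made-up data is only a caricature:

```python
import numpy as np
from scipy import stats

def test_and_pool(y_prob, y_nonprob, alpha=0.05):
    """Pretest comparability of the two samples; pool only if not rejected."""
    _, p = stats.ttest_ind(y_prob, y_nonprob, equal_var=False)
    if p > alpha:                                   # comparable: pool both sources
        return np.mean(np.concatenate([y_prob, y_nonprob])), "pooled"
    return np.mean(y_prob), "probability-only"      # otherwise keep the gold standard

rng = np.random.default_rng(4)
y_p  = rng.normal(10.0, 2.0, 300)    # probability (gold-standard) sample
y_np = rng.normal(10.4, 2.0, 3000)   # non-probability sample with possible selection bias
print(test_and_pool(y_p, y_np))
```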

1 citation


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a conformal quantile regression that calibrates the bounds of the prediction interval to ensure that its coverage rate is as close as possible to the nominal confidence level; the proposed method surpasses the benchmarks by providing narrower prediction intervals with more accurate empirical coverage probability.
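
A minimal conformalized quantile regression sketch in the spirit of this TL;DR, using scikit-learn quantile gradient boosting on synthetic data; the model choice and every parameter value are placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 10, (2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3 + 0.05 * X[:, 0])

# Split into a proper training set and a calibration set
X_tr, y_tr, X_cal, y_cal = X[:1000], y[:1000], X[1000:], y[1000:]
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X_tr, y_tr)

# Conformity scores: how much the raw quantile band misses the calibration points
s = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
q = np.quantile(s, np.ceil(0.9 * (len(s) + 1)) / len(s))   # finite-sample correction for 90% coverage

X_new = np.array([[5.0]])
print("calibrated 90% interval:", lo.predict(X_new) - q, hi.predict(X_new) + q)
```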

1 citation


Posted ContentDOI
09 Jun 2023
TL;DR: The authors proposed a distribution-free method to obtain confidence intervals with a theoretically established guarantee on coverage, using conditional conformal prediction to obtain calibration subsets for each data subgroup, leading to equalized coverage.
Abstract: Several uncertainty estimation methods have been recently proposed for machine translation evaluation. While these methods can provide a useful indication of when not to trust model predictions, we show in this paper that the majority of them tend to underestimate model uncertainty, and as a result they often produce misleading confidence intervals that do not cover the ground truth. We propose as an alternative the use of conformal prediction, a distribution-free method to obtain confidence intervals with a theoretically established guarantee on coverage. First, we demonstrate that split conformal prediction can "correct" the confidence intervals of previous methods to yield a desired coverage level. Then, we highlight biases in estimated confidence intervals, both in terms of the translation language pairs and the quality of translations. We apply conditional conformal prediction techniques to obtain calibration subsets for each data subgroup, leading to equalized coverage.
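
A sketch of the group-wise calibration idea, assuming nonconformity scores are already available for each translation direction; giving each subgroup its own conformal quantile is what yields (approximately) equalized coverage:

```python
import numpy as np

def groupwise_conformal_quantiles(scores, groups, alpha=0.1):
    """Per-subgroup conformal quantiles, so each group gets ~(1 - alpha) coverage."""
    qs = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        level = min(np.ceil((len(s) + 1) * (1 - alpha)) / len(s), 1.0)
        qs[g] = np.quantile(s, level)
    return qs

rng = np.random.default_rng(6)
groups = np.array(["en-de", "en-ru"] * 600)                 # hypothetical language pairs
scores = np.abs(rng.standard_normal(1200)) * np.where(groups == "en-ru", 2.0, 1.0)
print(groupwise_conformal_quantiles(scores, groups))        # harder subgroup gets a wider band
```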

Journal ArticleDOI
TL;DR: In this article, the reliability analysis for a multicomponent stress-strength (MSS) model is discussed; generalized estimates of MSS reliability are proposed based on constructed pivotal quantities, and associated Monte Carlo sampling is provided for computation.
Abstract: Reliability analysis for a multicomponent stress-strength (MSS) model is discussed in this paper. When strength and stress variables follow generalized inverted exponential distributions (GIEDs) with common scale parameters, the maximum likelihood estimate of MSS reliability is established along with its existence and uniqueness, and an approximate confidence interval is obtained in consequence. Additionally, alternative generalized estimates are proposed for MSS reliability based on constructed pivotal quantities, and associated Monte Carlo sampling is provided for computation. Further, classical and generalized estimates are also established for the case of unequal strength and stress parameters. For comparison, bootstrap confidence intervals are also provided under different cases. To compare the equivalence of the strength and stress parameters, likelihood ratio testing is presented as a complement. Finally, extensive simulation studies are carried out to assess the performance of the proposed methods, and a real data example is presented for application. The numerical results indicate that the proposed generalized methods perform better than conventional likelihood results.
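
A simulation sketch of s-out-of-k multicomponent stress-strength reliability under GIEDs, sampling by inverse transform from the GIED CDF F(x) = 1 - (1 - exp(-lam/x))^alpha; the parameters are illustrative, and the paper's estimators are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

def rgied(alpha, lam, size, rng):
    """Inverse-transform samples from the generalized inverted exponential
    distribution with CDF F(x) = 1 - (1 - exp(-lam/x))**alpha."""
    u = rng.uniform(size=size)
    return -lam / np.log(1.0 - (1.0 - u) ** (1.0 / alpha))

# Hypothetical s-out-of-k system: it works if at least s strengths exceed the stress
s, k, n_sys = 2, 4, 200_000
strength = rgied(alpha=2.5, lam=1.0, size=(n_sys, k), rng=rng)
stress   = rgied(alpha=1.0, lam=1.0, size=(n_sys, 1), rng=rng)
R_sk = np.mean((strength > stress).sum(axis=1) >= s)
print(f"Monte Carlo MSS reliability R({s},{k}) = {R_sk:.4f}")
```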

Journal ArticleDOI
TL;DR: In this paper, the authors propose interval estimators for the lower mean ratio u using jackknife empirical likelihood (JEL), adjusted JEL (AJEL), mean JEL (MJEL), mean adjusted JEL (MAJEL), and adjusted mean JEL methods, and compare these methods in terms of coverage probability and average confidence interval length; the simulation results indicate that MAJEL performs the best among these methods for small sample sizes from skewed distributions.
Abstract: Measuring economic inequality is an important topic in exploring the social system. The Gini index and Pietra ratio are widely used but are limited in reflecting the sampling distribution. In this paper, we study interval estimation for another measure, the lower mean ratio u. Using jackknife empirical likelihood (JEL), adjusted jackknife empirical likelihood (AJEL), mean jackknife empirical likelihood (MJEL), mean adjusted jackknife empirical likelihood (MAJEL), and adjusted mean jackknife empirical likelihood methods, we propose interval estimators for u. In a simulation study, we compare these methods in terms of coverage probability and average confidence interval length. The simulation results indicate that MAJEL performs the best among these methods for small sample sizes from skewed distributions. For small sample sizes from the normal distribution, both JEL and MJEL show better performance than the other methods, but MJEL is relatively time-consuming. Finally, two real data sets are analysed to illustrate the proposed methods.
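
A compact sketch of the JEL recipe: form jackknife pseudo-values of the statistic, apply standard empirical likelihood for a mean to them, and invert the Wilks calibration over a grid. The "lower mean ratio" below is a hypothetical stand-in (mean of the lower half over the overall mean), not necessarily the paper's exact definition:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_el(z):
    """-2 log empirical likelihood ratio for H0: E[z] = 0 (EL for a mean)."""
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                          # zero lies outside the convex hull
    g = lambda lam: np.mean(z / (1 + lam * z))
    lam = brentq(g, -1 / z.max() + 1e-9, -1 / z.min() - 1e-9)
    return 2 * np.sum(np.log1p(lam * z))

def jel_interval(x, stat, grid):
    """JEL interval: empirical likelihood applied to jackknife pseudo-values."""
    n, theta = len(x), stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    V = n * theta - (n - 1) * loo              # jackknife pseudo-values
    keep = [t for t in grid if neg2_log_el(V - t) <= chi2.ppf(0.95, 1)]
    return min(keep), max(keep)

stat = lambda x: x[x <= np.median(x)].mean() / x.mean()   # stand-in "lower mean ratio"
x = np.random.default_rng(8).lognormal(0.0, 0.8, 150)     # skewed, income-like data
print(jel_interval(x, stat, np.linspace(0.05, 0.95, 181)))
```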

Journal ArticleDOI
TL;DR: The least squares estimator of the autoregressive coefficient in the bifurcating autoregressive (BAR) model was recently shown to suffer from substantial bias, especially for small to moderate samples.
Abstract: The least squares (LS) estimator of the autoregressive coefficient in the bifurcating autoregressive (BAR) model was recently shown to suffer from substantial bias, especially for small to moderate samples. This study investigates the impact of the bias in the LS estimator on the behavior of various types of bootstrap confidence intervals for the autoregressive coefficient and introduces methods for constructing bias-corrected bootstrap confidence intervals. We first describe several bootstrap confidence interval procedures for the autoregressive coefficient of the BAR model and present their bias-corrected versions. The behavior of uncorrected and corrected confidence interval procedures is studied empirically through extensive Monte Carlo simulations and two real cell lineage data applications. The empirical results show that the bias in the LS estimator can have a significant negative impact on the behavior of bootstrap confidence intervals and that bias correction can significantly improve the performance of bootstrap confidence intervals in terms of coverage, width, and symmetry.
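
A toy version of the bias-correction idea on a plain AR(1) series (standing in for one lineage of the bifurcating tree): residual-bootstrap the LS estimator, estimate its bias, and shift the percentile interval accordingly; this is one simple correction among the several procedures the paper studies:

```python
import numpy as np

rng = np.random.default_rng(9)

def ls_ar1(x):
    """Least squares estimate of the AR(1) coefficient."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

phi_true, n = 0.5, 40                          # small sample, where the LS bias matters
x = np.empty(n); x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

phi_hat = ls_ar1(x)
resid = x[1:] - phi_hat * x[:-1]

B, boot = 2000, np.empty(2000)                 # residual bootstrap of the estimator
for b in range(B):
    e = rng.choice(resid - resid.mean(), size=n - 1, replace=True)
    xb = np.empty(n); xb[0] = x[0]
    for t in range(1, n):
        xb[t] = phi_hat * xb[t - 1] + e[t - 1]
    boot[b] = ls_ar1(xb)

bias = boot.mean() - phi_hat                   # bootstrap estimate of the LS bias
lo, hi = np.percentile(boot, [2.5, 97.5])
print("percentile CI:    ", (lo, hi))
print("bias-corrected CI:", (lo - bias, hi - bias))
```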

Journal ArticleDOI
TL;DR: In this article , the authors considered the Bayesian estimators under reference and Jeffery's priors and maximum likelihood estimators to estimate the unknown parameters of the process capability indices Spmk, Spmkc, and Cs for Frechet distribution.
Abstract: In this paper, we consider Bayesian estimators under reference and Jeffreys priors, as well as maximum likelihood estimators, to estimate the unknown parameters of the process capability indices Spmk, Spmkc, and Cs for the Fréchet distribution. Further, we develop bootstrap confidence intervals for the aforementioned process capability indices based on the above estimators. Monte Carlo simulations are performed to investigate the performance of the process capability indices through skewness, kurtosis, mean square error, and widths of the bootstrap confidence intervals for small, moderate, and large sample sizes. Simulation results indicate that the Bayesian estimator under the reference prior outperforms the others even for small sample sizes, while all perform equally well for larger sample sizes. Moreover, the average width of the bootstrap confidence interval for Cs is the smallest of all. Finally, real data are analyzed for illustration purposes.

Journal ArticleDOI
TL;DR: In this paper , the authors considered an RIS-assisted cellular-based RFpowered IoT network, where the cellular base stations (BSs) broadcast energy signal to IoT devices for energy harvesting (EH) in the charging stage, which is utilized to support the uplink (UL) transmissions in the subsequent UL stage.
Abstract: Having emerged as a promising solution for future wireless communication systems, the intelligent reflecting surface (IRS) is capable of reconfiguring the wireless propagation environment by adjusting the phase-shift of a large number of reflecting elements. To quantify the gain achieved by IRSs in radio frequency (RF) powered Internet of Things (IoT) networks, in this work we consider an IRS-assisted cellular-based RF-powered IoT network, where the cellular base stations (BSs) broadcast energy signals to IoT devices for energy harvesting (EH) in the charging stage, which is utilized to support the uplink (UL) transmissions in the subsequent UL stage. With tools from stochastic geometry, we first derive the distributions of the average signal power and interference power, which are then used to obtain the energy coverage probability, UL coverage probability, overall coverage probability, spatial throughput, and power efficiency, respectively. With the proposed analytical framework, we finally evaluate the effect of key system parameters, such as the IRS density, IRS reflecting element number, and charging stage ratio, on network performance. Compared with the conventional RF-powered IoT network, IRS passive beamforming brings the same level of enhancement in both energy coverage and UL coverage, leading to an unchanged optimal charging stage ratio when maximizing spatial throughput.

Journal ArticleDOI
TL;DR: In this paper, an improved method to arrange the light emitting diodes (LEDs) on the ceiling for an indoor visible light communication (VLC) system is proposed, which finds the optimal location of the LEDs by minimizing the outage probability at the user's location.
Abstract: In this paper, an improved method to arrange the light emitting diodes (LEDs) on the ceiling for an indoor visible light communication (VLC) system is proposed. More precisely, a LiFi-based connection is considered where several LEDs are placed on the ceiling of an office and communicate with several receivers (users) located randomly within the room area. It is proposed to find the optimal location of the LEDs by minimizing the outage probability at the users' locations. Both cases of static and mobile users are addressed. First, the closed-form expression for the outage probability at the user's location is derived. Then, by minimizing the average outage probability, the optimal location of the LEDs is found. The numerical results show that the proposed LED arrangement outperforms the classically used uniform LED arrangement in terms of average outage probability at the receiver and reduces the signal-to-noise ratio (SNR) necessary to achieve a target average outage probability by about 3 and 2.2 dB for mobile and static users, respectively. They also show that the performance superiority of the proposed method over the classical arrangement remains valid even under large estimation errors.
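
A crude grid-search sketch of the placement problem, assuming a simplified Lambertian LOS gain and a square 2x2 LED layout parameterized by its half-spacing; since outage at a fixed SNR threshold falls when the weakest locations receive more power, a low quantile of received power is used as the objective. All geometry and parameter values are invented:

```python
import numpy as np

def los_gain(d_xy, h, m=1):
    """Simplified Lambertian LOS channel gain for a ceiling LED at height h."""
    d2 = d_xy**2 + h**2
    cos_phi = h / np.sqrt(d2)
    return (m + 1) / (2 * np.pi * d2) * cos_phi**(m + 1)

room, h = 5.0, 2.15                           # room side (m), LED height above desks (m)
g = np.linspace(0.1, room - 0.1, 49)
users = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)    # grid of candidate user positions

best = (None, -np.inf)
for s in np.linspace(0.3, 2.3, 21):           # half-spacing of the 2x2 LED layout
    c = room / 2
    leds = np.array([[c - s, c - s], [c - s, c + s], [c + s, c - s], [c + s, c + s]])
    p = sum(los_gain(np.linalg.norm(users - l, axis=1), h) for l in leds)
    score = np.percentile(p, 5)               # raise the weakest locations' power,
    if score > best[1]:                       # which lowers outage for a fixed threshold
        best = (s, score)
print(f"best half-spacing ~ {best[0]:.2f} m")
```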

Journal ArticleDOI
TL;DR: In this paper , the locations of satellites and users were modeled using Poisson point processes on the surfaces of concentric spheres, and the coverage probability of a typical downlink user was derived as a function of relevant parameters, including path-loss exponent, satellite height, density, and Nakagami fading parameter.
Abstract: Satellite networks are promising to provide ubiquitous and high-capacity global wireless connectivity. Traditionally, satellite networks are modeled by placing satellites on a grid of multiple circular orbit geometries. Such a network model, however, requires intricate system-level simulations to evaluate coverage performance, and analytical understanding of the satellite network is limited. Continuing the success of stochastic geometry in a tractable analysis for terrestrial networks, in this paper, we develop novel models that are tractable for the coverage analysis of satellite networks using stochastic geometry. By modeling the locations of satellites and users using Poisson point processes on the surfaces of concentric spheres, we characterize analytical expressions for the coverage probability of a typical downlink user as a function of relevant parameters, including path-loss exponent, satellite height, density, and Nakagami fading parameter. Then, we also derive a tight lower bound of the coverage probability in tractable expression while keeping full generality. Leveraging the derived expression, we identify the optimal density of satellites in terms of the height and the path-loss exponent. Our key finding is that the optimal average number of satellites decreases logarithmically with the satellite height to maximize the coverage performance. Simulation results verify the exactness of the derived expressions.
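
A Monte Carlo sketch of the spherical model: a Poisson number of satellites placed uniformly on a sphere of radius R_E + h, a typical user on the Earth sphere, the strongest visible satellite served and the rest treated as interference. Rayleigh fading stands in for the paper's Nakagami model, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
R_E, h = 6371.0, 550.0                    # Earth radius and satellite altitude (km)
n_sat_mean, alpha, theta = 800, 2.0, 1.0  # mean satellite count, path-loss exponent, SINR threshold

def unit_sphere(n, rng):
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

user, hits, n_trials = np.array([0.0, 0.0, R_E]), 0, 2000
for _ in range(n_trials):
    sats = (R_E + h) * unit_sphere(rng.poisson(n_sat_mean), rng)
    vis = sats[:, 2] > R_E                        # above the user's local horizon plane
    if not vis.any():
        continue
    d = np.linalg.norm(sats[vis] - user, axis=1)
    p = rng.exponential(size=vis.sum()) * d**(-alpha)   # Rayleigh fading x path loss
    sinr = p.max() / (p.sum() - p.max() + 1e-12)        # serve strongest; rest interfere
    hits += sinr > theta
print(f"coverage probability ~ {hits / n_trials:.3f}")
```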

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new approach of cell-sweeping-based base station (BS) deployments in cellular Radio Access Networks (RANs), in which coverage is improved by enhancing cell-edge performance.
Abstract: Adequate and uniform network coverage provision is one of the main objectives of cellular service providers. Additionally, the densification of cells exacerbates coverage and service provision challenges, particularly at the cell-edges. In this paper, we present a new approach of cell-sweeping-based Base Station (BS) deployments in cellular Radio Access Networks (RANs) in which coverage is improved by enhancing the cell-edge performance. In essence, the concept of cell-sweeping rotates/sweeps the sectors of a site in azimuth continuously/discretely, resulting in a near-uniform distribution of the signal-to-interference-plus-noise ratio (SINR) around the sweeping site. This paper investigates the proposed concept analytically by deriving expressions for the PDF/CDF of the SINR and achievable rate, and with the help of system-level simulations, it shows that the proposed concept can provide throughput gains of up to 125% at the cell-edge. Then, using a link-budget analysis, it is shown that the maximum allowable path loss (MAPL) increases by 2.1 dB to 4.1 dB corresponding to the gains in wideband SINR and post-equalized SINR, respectively. This increase in MAPL can be translated to cell-radius/area with the help of the Okumura-Hata propagation model and results in cell-coverage area enhancement by 30% to 66% in a Typical Urban cell deployment scenario.

Journal ArticleDOI
TL;DR: In this article, a wind power probability prediction method based on the quantile regression of a dilated causal convolutional neural network is proposed to obtain more useful information than conventional point and interval predictions, and a prediction of the future complete probability distribution of wind power can be realized.
Abstract: Aiming at the wind power prediction problem, a wind power probability prediction method based on the quantile regression of a dilated causal convolutional neural network is proposed. With the developed model, the Adam stochastic gradient descent technique is utilized to fit the parameters of the dilated causal convolutional neural network under different quantile conditions and obtain the probability density distribution of wind power at various times within the following 200 hours. The presented method can obtain more useful information than conventional point and interval predictions. Moreover, a prediction of the future complete probability distribution of wind power can be realized. According to forecasts on actual wind power data from the PJM network in the United States, the proposed probability density prediction approach not only obtains more accurate point prediction results, but also obtains complete probability density curve predictions for wind power. Compared with two other quantile regression methods, the developed technique can achieve higher accuracy and a smaller prediction interval range under the same confidence level.
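
The core of any quantile-regression forecaster, including the dilated causal CNN used here, is the pinball loss; a small numpy sketch verifying that minimizing it recovers the corresponding quantile:

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Quantile (pinball) loss, the objective minimized in quantile regression."""
    e = y - q_pred
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(12)
y = rng.gamma(2.0, 3.0, 10_000)               # skewed, wind-power-like magnitudes
for tau in (0.1, 0.5, 0.9):
    grid = np.linspace(y.min(), y.max(), 400)
    best = grid[np.argmin([pinball_loss(y, q, tau) for q in grid])]
    print(tau, round(best, 3), round(np.quantile(y, tau), 3))   # minimizer ~ tau-quantile
```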

Journal ArticleDOI
22 Jun 2023-PeerJ
TL;DR: In this paper, the authors constructed confidence intervals for the common mean of several Weibull distributions using the Bayesian equal-tailed confidence interval and the highest posterior density interval based on the gamma prior.
Abstract: The Weibull distribution has been used to analyze data from many fields, including engineering, survival and lifetime analysis, and weather forecasting, particularly wind speed data. It is useful to measure the central tendency of wind speed data at specific locations using statistical parameters, for instance the mean, to accurately forecast the severity of future catastrophic events. In particular, the common mean of several independent wind speed samples collected from different locations is a useful statistic. To explore wind speed data from several areas in Surat Thani province, a large province in southern Thailand, we constructed estimates of the confidence interval for the common mean of several Weibull distributions using the Bayesian equal-tailed confidence interval and the highest posterior density interval based on the gamma prior. Their performances are compared with those of the generalized confidence interval and the adjusted method of variance estimates recovery based on their coverage probabilities and expected lengths. The results demonstrate that when the common mean is small and the sample size is large, the Bayesian highest posterior density interval performed the best since its coverage probabilities were higher than the nominal confidence level and it provided the shortest expected lengths. Moreover, the generalized confidence interval performed well in some scenarios, whereas the adjusted method of variance estimates recovery did not. The approaches were used to estimate the common mean of real wind speed datasets from several areas in Surat Thani province, Thailand, fitted to Weibull distributions. These results support the simulation results in that the Bayesian methods performed the best. Hence, the Bayesian highest posterior density interval is the most appropriate method for establishing the confidence interval for the common mean of several Weibull distributions.
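
A sketch of the highest-posterior-density summary used here: given posterior draws of the common mean (placeholder gamma draws below, not the actual Weibull posterior), the HPD interval is the shortest window containing 95% of the draws, contrasted with the equal-tailed interval:

```python
import numpy as np

def hpd_interval(draws, cred=0.95):
    """Shortest interval containing `cred` of the posterior draws (unimodal case)."""
    d = np.sort(draws)
    m = int(np.ceil(cred * len(d)))
    widths = d[m - 1:] - d[:len(d) - m + 1]
    i = np.argmin(widths)
    return d[i], d[i + m - 1]

rng = np.random.default_rng(13)
draws = rng.gamma(9.0, 0.5, 20_000)     # placeholder posterior draws of a common mean
print("95% HPD:         ", hpd_interval(draws))
print("95% equal-tailed:", tuple(np.percentile(draws, [2.5, 97.5])))
```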

Posted ContentDOI
04 Jul 2023
TL;DR: In this article, the authors provided a finer-grained analysis of LEO satellite networks modeled by a homogeneous Poisson point process (HPPP) and studied the distribution and moments of the conditional coverage probability given the point process.
Abstract: Recently, stochastic geometry has been applied to provide tractable performance analysis for low earth orbit (LEO) satellite networks. However, existing works mainly focus on analyzing the "coverage probability", which provides limited information. To provide more insights, this paper provides a finer-grained analysis of LEO satellite networks modeled by a homogeneous Poisson point process (HPPP). Specifically, the distribution and moments of the conditional coverage probability given the point process are studied. The developed analytical results can provide characterizations of LEO satellite networks that are not available in the existing literature, such as "user fairness" and "what fraction of users can achieve a given transmission reliability". Simulation results are provided to verify the developed analysis. Numerical results show that, in a dense satellite network, it is beneficial to deploy satellites at low altitude, for the sake of both coverage probability and user fairness.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the downlink, uplink, and joint downlink&uplink exposure induced by the radiation from BSs and personal user equipment (UE), respectively, in terms of the received power density and exposure index.
Abstract: Installing more base stations (BSs) into the existing cellular infrastructure is an essential way to provide greater network capacity and higher data rates in the 5th-generation cellular networks (5G). However, a non-negligible portion of the population is concerned that such network densification will generate a notable increase in exposure to electric and magnetic fields (EMF) over the territory. In this paper, we analyze the downlink, uplink, and joint downlink&uplink exposure induced by the radiation from BSs and personal user equipment (UE), respectively, in terms of the received power density and exposure index. In our analysis, we consider the EMF restrictions set by the regulatory authorities, such as the minimum distance between restricted areas (e.g., schools and hospitals) and BSs, and the maximum permitted exposure. Exploiting tools from stochastic geometry, mathematical expressions for the coverage probability and statistical EMF exposure are derived and validated. Tuning the system parameters such as the BS density and the minimum distance from a BS to restricted areas, we show a trade-off between reducing the population's exposure to EMF and enhancing the network coverage performance. Then, we formulate optimization problems to maximize the performance of the EMF-aware cellular network while ensuring that the EMF exposure complies with the standard regulation limits with high probability. For instance, the exposure from BSs is two orders of magnitude less than the maximum permissible level when the density of BSs is less than 20 BSs/km².

Journal ArticleDOI
TL;DR: In this paper , a confidence interval centred on a bootstrap smoothed estimator was proposed, with width proportional to an estimator of Efron's delta method approximation to the standard deviation of this estimator.
Abstract: Frequentist confidence intervals that include some element of data-based model selection or model averaging are an active area of research. Assessments of the performance, in terms of coverage and expected length, of such intervals yield few positive results. Efron (JASA, 2014) proposed a confidence interval centred on a bootstrap smoothed estimator, with width proportional to an estimator of Efron's delta method approximation to the standard deviation of this estimator. Recently, Kabaila and Wijethunga assessed the performance of this confidence interval using a testbed consisting of two nested linear regression models, with the error variance assumed known. This interval was shown to have far better coverage properties than the corresponding post-model-selection confidence interval. However, its expected length properties were not as good as had been hoped for. For this testbed, we ask the following question. Does there exist a formula for the data-based width of a confidence interval centred on the bootstrap smoothed estimator such that it has good performance in terms of both coverage and expected length? Using a decision-theoretic performance bound, we answer this question in the negative.

Proceedings ArticleDOI
01 Mar 2023
TL;DR: In this article, a RIS-assisted high-speed train (HST) communication system is considered to improve the coverage probability, and a closed-form expression for the coverage probability is derived.
Abstract: Reconfigurable intelligent surfaces (RISs) have received increasing attention due to their capability of extending cell coverage by reflecting signals toward receivers. This paper considers a RIS-assisted high-speed train (HST) communication system to improve coverage probability. We derive a closed-form expression for the coverage probability. Moreover, we analyze the impacts of key system parameters, including transmission power, signal-to-noise ratio threshold, and the horizontal distance between the base station and the RIS. Simulation results verify the efficiency of RIS-assisted HST communications in terms of coverage probability.

Journal ArticleDOI
TL;DR: In this paper, the downlink rate meta distribution of a typical UAV under base station (BS) cooperation in a cellular-connected UAV network is studied, and the impacts of relevant parameters on the rate meta distribution are investigated.
Abstract: This letter studies the downlink rate meta distribution of a typical UAV under base station (BS) cooperation in a cellular-connected UAV network. An analytical model is derived for the meta distribution of the downlink transmission rate of a typical UAV by using a standard beta distribution approximation, taking into account the LoS probability and Nakagami-m fading of a downlink. Based on the derived model, the impacts of relevant parameters on the rate meta distribution are investigated. The derived analytical model can be used to provide a reference for the setting of relevant parameters in the design of a BS cooperation strategy.
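
The beta-approximation step mentioned in the letter, sketched as a moment-matching function: fit Beta(a, b) to the first two moments of the conditional rate coverage probability, then read off the fraction of links meeting a reliability target (the moment values below are hypothetical):

```python
import numpy as np
from scipy import stats

def beta_meta_distribution(M1, M2):
    """Match Beta(a, b) to the first two moments of the conditional probability,
    the standard approximation used for the meta distribution."""
    var = M2 - M1**2
    a = M1 * (M1 * (1 - M1) / var - 1)
    return stats.beta(a, a * (1 - M1) / M1)

F = beta_meta_distribution(M1=0.8, M2=0.68)   # hypothetical moments
print("fraction of UAV links with reliability > 0.9:", F.sf(0.9))
```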

Journal ArticleDOI
TL;DR: In this paper, the authors compared seven confidence interval estimation methods, namely Wald, Agresti-Coull add z², Agresti-Coull add 4, Wilson score, Clopper-Pearson, Mid-p, and Jeffreys, for a single proportion with different event incidences and precisions.
Abstract: OBJECTIVE To compare different methods for calculating sample size based on confidence interval estimation for a single proportion with different event incidences and precisions. METHODS We compared 7 methods, namely Wald, Agresti-Coull add z², Agresti-Coull add 4, Wilson score, Clopper-Pearson, Mid-p, and Jeffreys, for confidence interval estimation for a single proportion. The sample size was calculated using the search method with different parameter settings (proportion of specified events and half-width of the confidence interval [ω=0.05, 0.1]). With Monte Carlo simulation, the estimated sample size was used to simulate and compare the width of the confidence interval, the coverage of the confidence interval, and the ratio of the non-coverage probability. RESULTS For a high accuracy requirement (ω=0.05), the Mid-p method and Clopper-Pearson method performed better when the incidence of events was low (P < 0.15). In other settings, the performance of the 7 methods did not differ significantly except for the poor symmetry of the Wald method. In the setting of ω=0.1 with a very low p (0.01-0.05), failure of iteration occurred with nearly all the methods except for the Clopper-Pearson method. CONCLUSION Different sample size determination methods based on confidence interval estimation should be selected for single proportions with different parameter settings.
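
A sketch of the search method for one of the seven interval types: increase n until the Wilson score interval's half-width drops to the target ω; the other methods substitute their own width formulas:

```python
import numpy as np
from scipy import stats

def wilson_halfwidth(p, n, conf=0.95):
    """Half-width of the Wilson score interval for a proportion."""
    z = stats.norm.ppf(0.5 + conf / 2)
    return (z / (1 + z**2 / n)) * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))

def sample_size_wilson(p, omega, conf=0.95):
    """Smallest n whose Wilson half-width is at most omega (search method)."""
    n = 1
    while wilson_halfwidth(p, n, conf) > omega:
        n += 1
    return n

for p, omega in [(0.1, 0.05), (0.1, 0.1), (0.3, 0.05)]:
    print(f"p = {p}, omega = {omega}: n = {sample_size_wilson(p, omega)}")
```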

Posted ContentDOI
28 Feb 2023
TL;DR: In this article, the authors introduce Joint Coverage Regions (JCRs), which unify confidence intervals and prediction regions in frequentist statistics, and demonstrate the use of JCRs in statistical problems such as constructing efficient prediction sets when the parameter space is structured.
Abstract: We introduce Joint Coverage Regions (JCRs), which unify confidence intervals and prediction regions in frequentist statistics. Specifically, joint coverage regions aim to cover a pair formed by an unknown fixed parameter (such as the mean of a distribution) and an unobserved random datapoint (such as the outcome associated with a new test datapoint). The first corresponds to a confidence component, while the second corresponds to a prediction part. In particular, our notion unifies classical statistical methods such as the Wald confidence interval with distribution-free prediction methods such as conformal prediction. We show how to construct finite-sample valid JCRs when a conditional pivot is available, under the same conditions where exact finite-sample confidence and prediction sets are known to exist. We further develop efficient JCR algorithms, including split-data versions that introduce adequate sets to reduce the cost of repeated computation. We illustrate the use of JCRs in statistical problems such as constructing efficient prediction sets when the parameter space is structured.

Journal ArticleDOI
TL;DR: In this article, the authors established a sub-6 GHz and mmWave hybrid heterogeneous cellular network based on the modified Poisson hole process (MPHP) model and derived the coverage probability using an interference calculation method that integrates over the nearest-sector exclusion area.

Journal ArticleDOI
TL;DR: In this paper, the supplier selection problem is considered using the generalized confidence interval (GCI) of the difference between two process capability indices, δ′, and three real data sets are reanalyzed to illustrate the methodology.
Abstract: The generalized confidence interval (GCI) method has been used many times for process capability indices in several research articles. In this article, we consider the supplier selection problem by using the GCI of the difference between two process capability indices, δ′. We consider two classical methods of estimation, viz. maximum likelihood estimation (MLE) and maximum product spacing estimation (MPSE), to estimate δ′. Using Monte Carlo simulation, we obtain the biases and corresponding mean squared errors (MSEs) of the estimates of δ′. We also find the lower confidence limit (L), the upper confidence limit (U), and their corresponding average width (AW) for both classical methods, MLE and MPSE. Three real data sets are reanalyzed to illustrate the methodology of the supplier selection problem by utilizing the generalized confidence interval method.

Journal ArticleDOI
24 Feb 2023-PLOS ONE
TL;DR: In this article, the authors describe and compare confidence interval estimation methods for the standardized contrasts of treatment effects in ANCOVA designs and present sample size procedures to assure that the resulting confidence intervals yield informative estimation with adequate precision.
Abstract: Standardized effect sizes and confidence intervals are useful statistical assessments for comparing results across different studies when measurement units are not directly comparable. This paper aims to describe and compare confidence interval estimation methods for the standardized contrasts of treatment effects in ANCOVA designs. Sample size procedures are also presented to assure that the resulting confidence intervals yield informative estimation with adequate precision. The exact interval estimation approach has theoretical and empirical advantages in coverage probability and interval width over the approximate interval procedures. Numerical investigations of the existing method reveal that the omission of covariate variables has a negative impact on sample size calculations for precise interval estimation, especially when there is disparity among influential covariate variables. The proposed approaches and developed computer programs fully utilize covariate properties in interval estimation and provide accurate sample size determinations under the precision considerations of the expected interval width and the assurance probability of interval width.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new method of finding confidence intervals that are often narrower than traditional confidence intervals for any individual parameter in a linear model if the errors are from a skewed distribution or from a long-tailed symmetric distribution.
Abstract: In stating the results of their research, scientists usually want to publish narrow confidence intervals because they give precise estimates of the effects of interest. In many cases, the researcher would want to use the narrowest interval that maintains the desired coverage probability. In this manuscript, we propose a new method of finding confidence intervals that are often narrower than traditional confidence intervals for any individual parameter in a linear model if the errors are from a skewed distribution or from a long‐tailed symmetric distribution. If the errors are normally distributed, we show that the width of the proposed normal scores confidence interval will not be much greater than the width of the traditional interval. If the dataset includes predictor variables that are uncorrelated or moderately correlated then the confidence intervals will maintain their coverage probability. However, if the covariates are highly correlated, then the coverage probability of the proposed confidence interval may be slightly lower than the nominal value. The procedure is not computationally intensive and an R program is available to determine the normal scores 95% confidence interval. Whenever the covariates are not highly correlated, the normal scores confidence interval is recommended for the analysis of datasets having 50 or more observations.

Journal ArticleDOI
TL;DR: In this article, the authors compared the performance of three nonparametric bootstrap confidence intervals (BCIs) for C_pc, i.e., the standard bootstrap, the percentile bootstrap, and the bias-corrected percentile bootstrap.
Abstract: We consider the process capability index (PCI), a widely used quality-related statistic used to assess the quality of products and the performance of monitored processes in various industries. It is widely known that conventional PCIs perform well when the quality process being monitored has a normal distribution. Unfortunately, using the indices to evaluate a non-normally distributed process often leads to inaccurate results. In this article, we consider a new PCI, C_pc, that can be used in both normal and non-normal scenarios. The objective of this article is threefold: (i) We provide a corrected form of the confidence interval for C_pc. (ii) We compare the performance of three nonparametric bootstrap confidence intervals (BCIs) for C_pc: the standard bootstrap, percentile bootstrap, and bias-corrected percentile bootstrap. Under various distributional assumptions such as the normal, chi-square, Student t, Laplace, and two-parameter exponential distributions, the estimated coverage probabilities and average widths of the confidence intervals and BCIs for C_pc are compared. (iii) The power of the respective bootstrap approaches is evaluated by using the equivalence relation between confidence interval construction and two-sided hypothesis testing. We also provide receiver operating characteristic curves to evaluate their performance. Finally, as an illustrative example, an actual data set related to groove dimensions (in inches) measured from a manufacturing process of ignition keys is re-analyzed to illustrate the utility of the proposed methods.
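
A compact sketch of the three bootstrap intervals compared in this article, applied to a Luceño-type C_pc statistic on skewed synthetic data; the specification limits and target below are hypothetical:

```python
import numpy as np
from scipy import stats

def bootstrap_cis(x, stat, B=4000, conf=0.95, seed=0):
    """Standard, percentile, and bias-corrected percentile bootstrap CIs for stat(x)."""
    rng = np.random.default_rng(seed)
    t0 = stat(x)
    t = np.array([stat(rng.choice(x, len(x), replace=True)) for _ in range(B)])
    z = stats.norm.ppf(0.5 + conf / 2)
    standard = (t0 - z * t.std(ddof=1), t0 + z * t.std(ddof=1))
    percentile = tuple(np.quantile(t, [(1 - conf) / 2, (1 + conf) / 2]))
    z0 = stats.norm.ppf((t < t0).mean())                 # bias-correction factor
    bc = tuple(np.quantile(t, stats.norm.cdf([2 * z0 - z, 2 * z0 + z])))
    return {"standard": standard, "percentile": percentile, "bc_percentile": bc}

# Luceño-type index: C_pc = (USL - LSL) / (6 * sqrt(pi/2) * E|X - T|)
LSL, USL, T = 1.0, 9.0, 5.0
cpc = lambda x: (USL - LSL) / (6 * np.sqrt(np.pi / 2) * np.mean(np.abs(x - T)))
x = np.random.default_rng(1).gamma(4.0, 1.2, 80)         # skewed process data
print(bootstrap_cis(x, cpc))
```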