Topic

Coverage probability

About: Coverage probability is a research topic. Over the lifetime, 2479 publications have been published within this topic receiving 53259 citations.
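The notion can be made concrete with a short simulation: a confidence interval's coverage probability is the long-run fraction of repeated samples whose interval contains the true parameter. A minimal sketch in Python (the 95% z-interval for a normal mean with known σ; all names and parameter values are illustrative):

```python
import random
import statistics

def ci_covers(mu: float, sigma: float, n: int, z: float = 1.96) -> bool:
    """Draw one sample of size n from N(mu, sigma^2) and check whether the
    z-based 95% confidence interval for the mean contains the true mu."""
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(sample)
    half = z * sigma / n ** 0.5
    return xbar - half <= mu <= xbar + half

random.seed(42)
trials = 20_000
hits = sum(ci_covers(mu=5.0, sigma=2.0, n=30) for _ in range(trials))
print(f"estimated coverage probability: {hits / trials:.3f}")  # ≈ 0.95
```

For an exact interval the estimate converges to the nominal level (here 0.95); for approximate intervals, such as those in the papers below, the gap between estimated and nominal coverage is precisely what is being studied.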


Papers
Journal ArticleDOI
TL;DR: This paper comprehensively evaluates the impact of key factors on the coverage probability, such as the fading channel coefficient, the number and size of the reflecting elements, and the angle of incidence, which can be helpful in the performance evaluation and practical implementation of the IRS.
Abstract: The intelligent reflective surface (IRS) technology has received much interest in recent years thanks to its potential uses in future wireless communications, one of the most promising of which is to widen coverage, especially in line-of-sight-blocked scenarios. It is therefore critical to analyze the coverage probability of IRS-aided communication systems; to the best of our knowledge, however, previous works on this issue are very limited. In this paper, we analyze the coverage probability under the Rayleigh fading channel, taking the number and size of the array elements into consideration. We first derive the exact closed-form coverage probability for a single element. Afterward, using moment matching, the coverage probability for an arbitrary number of elements can be approximated as the ratio of the upper incomplete Gamma function to the Gamma function. Finally, we comprehensively evaluate the impact of key factors on the coverage probability, such as the fading channel coefficient, the number and size of the elements, and the angle of incidence. Overall, the paper provides a succinct and general expression for the coverage probability, which can be helpful in the performance evaluation and practical implementation of the IRS.

15 citations
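The paper's exact expressions are not reproduced above, but the single-link Rayleigh-fading case has a textbook closed form: the received SNR is exponentially distributed, so the coverage probability is P(SNR > T) = exp(−T/γ̄) for mean SNR γ̄. A hedged sketch checking this against Monte Carlo (the simulation setup and variable names are assumptions, not the paper's):

```python
import math
import random

def coverage_rayleigh(threshold: float, mean_snr: float) -> float:
    """Closed-form coverage under Rayleigh fading: the SNR is exponentially
    distributed, so P(SNR > T) = exp(-T / mean_snr)."""
    return math.exp(-threshold / mean_snr)

def coverage_monte_carlo(threshold: float, mean_snr: float,
                         trials: int = 100_000) -> float:
    """Monte Carlo check: |h|^2 for a Rayleigh channel is exponential."""
    random.seed(0)
    hits = 0
    for _ in range(trials):
        # sum of two squared N(0,1) draws is chi-square(2) = Exp(1/2);
        # dividing by 2 gives a unit-mean exponential fading gain
        gain = (random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2) / 2
        if mean_snr * gain > threshold:
            hits += 1
    return hits / trials

T, g = 1.0, 3.0
print(coverage_rayleigh(T, g), coverage_monte_carlo(T, g))
```

The paper's contribution is the generalization of this single-element case to an arbitrary number of IRS elements via moment matching, where the exponential is replaced by a Gamma-type approximation.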

Journal ArticleDOI
TL;DR: This article investigates a new target detection model, referred to as certainty-based target detection (as compared to the traditional uncertainty-based target detection), to facilitate the formulation of the visual coverage problem, and derives a closed-form solution for the estimation of the visual coverage probability based on this new model, which takes visual occlusion into account.
Abstract: Coverage estimation is one of the fundamental problems in sensor networks. Coverage estimation in visual sensor networks (VSNs) is more challenging than in conventional 1-D (omnidirectional) scalar sensor networks (SSNs) because of the directional sensing nature of cameras and the existence of visual occlusion in crowded environments. This article represents a first attempt toward a closed-form solution for the visual coverage estimation problem in the presence of occlusions. We investigate a new target detection model, referred to as certainty-based target detection (as compared to the traditional uncertainty-based target detection), to facilitate the formulation of the visual coverage problem. We then derive the closed-form solution for the estimation of the visual coverage probability based on this new target detection model, which takes visual occlusions into account. Based on the coverage estimation model, we further propose an estimate of the minimum sensor density that suffices to ensure visual K-coverage in a crowded sensing field. Simulation results show close agreement with the theoretical formulation, especially when the boundary effect is considered. Thus, the closed-form solution for visual coverage estimation is effective when applied to real scenarios, such as efficient sensor deployment and optimal sleep scheduling.

14 citations
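As a rough companion to the abstract above: under the standard Boolean-sector model (which ignores the occlusion effects the paper actually handles), sensors deployed as a Poisson point process K-cover a point with probability given by a Poisson tail, and a minimum density meeting a coverage target can be found by a simple search. A sketch, with the field-of-view and range parameters chosen arbitrarily:

```python
import math

def k_coverage_prob(density: float, area: float, k: int) -> float:
    """Probability that a point is covered by at least k sensors when sensors
    form a Poisson process of the given density; m = density * area is the
    mean number of sensors whose sensing region contains the point."""
    m = density * area
    miss = sum(math.exp(-m) * m ** i / math.factorial(i) for i in range(k))
    return 1.0 - miss

def min_density(area: float, k: int, target: float,
                step: float = 1e-3) -> float:
    """Smallest density (on a step grid) achieving the coverage target."""
    lam = step
    while k_coverage_prob(lam, area, k) < target:
        lam += step
    return lam

# directional camera: 60-degree field of view, 10 m range -> sector area
fov, r = math.radians(60), 10.0
sector = 0.5 * fov * r ** 2
lam_min = min_density(sector, k=2, target=0.95)
print(f"minimum density for 2-coverage at 95%: {lam_min} sensors per m^2")
```

The paper's certainty-based model with occlusion yields a different (and lower) effective sensing area per camera, so its density estimates will exceed this idealized bound.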

Posted Content
TL;DR: In this article, a generalized linear regression analysis with compositional covariates is proposed, where a group of linear constraints on regression coefficients are imposed to account for the compositional nature of the data and to achieve subcompositional coherence.
Abstract: Motivated by regression analysis for microbiome compositional data, this paper considers generalized linear regression analysis with compositional covariates, where a group of linear constraints on the regression coefficients is imposed to account for the compositional nature of the data and to achieve subcompositional coherence. A penalized likelihood estimation procedure using a generalized accelerated proximal gradient method is developed to efficiently estimate the regression coefficients. A de-biased procedure is developed to obtain asymptotically unbiased and normally distributed estimates, which leads to valid confidence intervals for the regression coefficients. Simulation results confirm the correct coverage probability of the confidence intervals and show smaller variances of the estimates when the appropriate linear constraints are imposed. The methods are illustrated by a microbiome study aimed at identifying bacterial species associated with inflammatory bowel disease (IBD) and at predicting IBD using the fecal microbiome.

14 citations
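The linear constraint referred to above is typically a zero-sum constraint on the coefficients of log-transformed compositional covariates, which is what yields subcompositional coherence. A minimal sketch of that constraint (the projection step is purely illustrative; it is not the paper's penalized-likelihood procedure):

```python
import math

def log_contrast_features(composition):
    """Log-transform a composition (proportions summing to 1). Under the
    zero-sum coefficient constraint, the linear predictor is invariant to
    the arbitrary scaling of the underlying counts."""
    return [math.log(p) for p in composition]

def project_zero_sum(beta):
    """Project an unconstrained coefficient vector onto the subspace
    sum(beta) = 0 -- the linear constraint for compositional covariates."""
    shift = sum(beta) / len(beta)
    return [b - shift for b in beta]

beta = [0.8, -0.1, 0.5, 0.2]          # hypothetical unconstrained estimates
beta_c = project_zero_sum(beta)
print(beta_c, "sum =", sum(beta_c))   # sum is (numerically) zero
```

With the constraint satisfied, adding a constant inside the log (i.e., rescaling the composition) leaves the fitted linear predictor unchanged, which is the coherence property the abstract refers to.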

Journal ArticleDOI
28 Jul 2017
TL;DR: In this paper, the authors propose confidence intervals for a single mean and for the difference of two means of normal distributions with unknown coefficients of variation (CVs); these are compared, via Monte Carlo simulation, with the existing confidence intervals for the single normal mean based on the Student's t-distribution (small-sample case) and the z-distribution (large-sample case).
Abstract: This paper proposes confidence intervals for a single mean and for the difference of two means of normal distributions with unknown coefficients of variation (CVs). The generalized confidence interval (GCI) approach and the large sample (LS) approach were proposed to construct confidence intervals for the single normal mean with unknown CV. These confidence intervals were compared with the existing confidence intervals for the single normal mean based on the Student's t-distribution (small-sample case) and the z-distribution (large-sample case). Furthermore, confidence intervals for the difference between two normal means with unknown CVs were constructed based on the GCI approach, the method of variance estimates recovery (MOVER) approach, and the LS approach, and then compared with the Welch–Satterthwaite (WS) approach. The coverage probability and average length of the proposed confidence intervals were evaluated via Monte Carlo simulation. The results indicated that the GCIs for the single normal mean and for the difference of two normal means with unknown CVs are better than the other confidence intervals. Finally, three datasets are given to illustrate the proposed confidence intervals.

14 citations
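The paper's two evaluation criteria, coverage probability and average length, can both be estimated by the same Monte Carlo loop. A sketch for the ordinary Student's t interval (using t_{0.975,9} ≈ 2.262 for n = 10; the GCI/MOVER/LS constructions themselves are not reproduced here):

```python
import random
import statistics

def t_interval(sample, tcrit):
    """Two-sided Student's t confidence interval for the mean."""
    xbar = statistics.fmean(sample)
    s = statistics.stdev(sample)
    half = tcrit * s / len(sample) ** 0.5
    return xbar - half, xbar + half

def simulate(mu, sigma, n, tcrit, trials=20_000):
    """Estimate coverage probability and average length by Monte Carlo --
    the same two criteria the paper uses to compare interval methods."""
    random.seed(1)
    hits, total_len = 0, 0.0
    for _ in range(trials):
        lo, hi = t_interval([random.gauss(mu, sigma) for _ in range(n)], tcrit)
        hits += lo <= mu <= hi
        total_len += hi - lo
    return hits / trials, total_len / trials

cov, avg_len = simulate(mu=10.0, sigma=2.0, n=10, tcrit=2.262)  # t_{0.975, 9}
print(f"coverage ≈ {cov:.3f}, average length ≈ {avg_len:.2f}")
```

A method "wins" such a comparison when its coverage stays at or above the nominal level while its average length is shorter, which is the sense in which the paper finds the GCIs preferable.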

Journal ArticleDOI
TL;DR: In this paper, a broad class of confidence intervals for μ1 − μ2 with minimum coverage probability 1 − α is considered; the authors focus on whether any interval in this class can utilize the uncertain prior information that the variances are equal substantially better than the standard interval J0, and show that none can.
Abstract: Consider two independent random samples of size f + 1, one from an N(μ1, σ1²) distribution and the other from an N(μ2, σ2²) distribution, where σ1²/σ2² ∈ (0, ∞). The Welch 'approximate degrees of freedom' ('approximate t-solution') confidence interval for μ1 − μ2 is commonly used when it cannot be guaranteed that σ1²/σ2² = 1. Kabaila (2005, Comm. Statist. Theory and Methods 34, 291–302) multiplied the half-width of this interval by a positive constant so that the resulting interval, denoted by J0, has minimum coverage probability 1 − α. Now suppose that we have uncertain prior information that σ1²/σ2² = 1. We consider a broad class of confidence intervals for μ1 − μ2 with minimum coverage probability 1 − α. This class includes the interval J0, which we use as the standard against which other members of the class will be judged. A confidence interval J utilizes the prior information substantially better than J0 if (expected length of J)/(expected length of J0) is (a) substantially less than 1 (less than 0.96, say) for σ1²/σ2² = 1, and (b) not too much larger than 1 for all other values of σ1²/σ2². For a given f, does there exist a confidence interval in this class that satisfies these conditions? We focus on the question of whether condition (a) can be satisfied. For each given f, we compute a lower bound to the minimum over the class of (expected length of J)/(expected length of J0) when σ1²/σ2² = 1. For 1 − α = 0.95, this lower bound is not substantially less than 1. Thus, there does not exist any confidence interval in this class that utilizes the prior information substantially better than J0.

14 citations
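For reference, the Welch 'approximate degrees of freedom' mentioned in the abstract come from the Welch–Satterthwaite formula; with equal variances and equal sample sizes n1 = n2 = f + 1 it reduces to 2f, matching the pooled-variance case. A minimal sketch:

```python
def welch_df(s1_sq: float, n1: int, s2_sq: float, n2: int) -> float:
    """Welch-Satterthwaite approximate degrees of freedom for the difference
    of two means with possibly unequal variances:
    (s1^2/n1 + s2^2/n2)^2 / [(s1^2/n1)^2/(n1-1) + (s2^2/n2)^2/(n2-1)]."""
    v1, v2 = s1_sq / n1, s2_sq / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# equal variances, equal sample sizes (n1 = n2 = 10, so f = 9):
# the formula recovers exactly n1 + n2 - 2 = 18 degrees of freedom
print(welch_df(4.0, 10, 4.0, 10))  # 18.0
```

Kabaila's interval J0 in the abstract is this Welch interval with its half-width inflated by a constant chosen so that the minimum coverage probability is exactly 1 − α.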


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (86% related)
- Statistical hypothesis testing: 19.5K papers, 1M citations (80% related)
- Linear model: 19K papers, 1M citations (79% related)
- Markov chain: 51.9K papers, 1.3M citations (79% related)
- Multivariate statistics: 18.4K papers, 1M citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years
Year   Papers
2024   1
2023   63
2022   153
2021   142
2020   151
2019   142