
Showing papers on "Cumulative distribution function published in 2012"


Journal ArticleDOI
TL;DR: The key idea is to align the complexity level and order of analysis with the reliability and detail of the statistical information on the input parameters, thereby avoiding the need to assign parametric probability distributions that are not sufficiently supported by the limited available data.

350 citations


Journal ArticleDOI
TL;DR: For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by the standard Gaussian complementary cumulative distribution function.
Abstract: This paper studies the minimum achievable source coding rate as a function of blocklength n and probability ϵ that the distortion exceeds a given level d. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by R(d) + √(V(d)/n) Q⁻¹(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q⁻¹(·) is the inverse of the standard Gaussian complementary cumulative distribution function.
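The dispersion approximation above is straightforward to evaluate numerically. A minimal sketch, with illustrative placeholder values for R(d) and V(d) (not taken from the paper), using SciPy's inverse survival function for Q⁻¹:

```python
# Sketch: evaluate the finite-blocklength approximation
#   R(n, d, eps) ~ R(d) + sqrt(V(d)/n) * Qinv(eps).
# R_d and V_d are illustrative placeholders, not values from the paper.
from scipy.stats import norm

def rate_approx(n, eps, R_d, V_d):
    """Dispersion approximation to the minimum coding rate at blocklength n."""
    q_inv = norm.isf(eps)  # inverse of the Gaussian complementary CDF Q
    return R_d + (V_d / n) ** 0.5 * q_inv

# The rate penalty over R(d) shrinks like 1/sqrt(n)
for n in (100, 1_000, 10_000):
    print(n, rate_approx(n, eps=1e-3, R_d=0.5, V_d=0.25))
```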

248 citations


Journal ArticleDOI
TL;DR: In this article, an analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition, and an extensive set of Monte Carlo simulations was conducted to examine whether the expressions derived under this assumption allowed for accurate prediction of the empirical c-statistic.
Abstract: When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone; it must always be considered in relation to the heterogeneity of the population.
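For the equal-variance binormal case, the dependence described above can be checked by simulation. In the sketch below, the closed form Φ(σβ/√2) is our reading of the standard binormal AUC formula (the abstract only states dependence on the product of the standard deviation σ and the log-odds ratio β); all parameter values are illustrative:

```python
# Monte Carlo check of the equal-variance binormal case: the empirical
# c-statistic should approach Phi(sigma * beta / sqrt(2)), a standard normal
# CDF evaluated at the product of the SD and the log-odds ratio. The sqrt(2)
# scaling is our reading of the binormal AUC formula; values are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma, beta, n = 1.5, 0.8, 200_000
delta = beta * sigma**2            # mean shift implied by logistic slope beta
x0 = rng.normal(0.0, sigma, n)     # covariate among those without the condition
x1 = rng.normal(delta, sigma, n)   # covariate among those with the condition

c_empirical = np.mean(x1 > x0)     # P(X1 > X0), i.e. the c-statistic / AUC
c_theory = norm.cdf(sigma * beta / np.sqrt(2))
print(c_empirical, c_theory)       # the two should agree closely
```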

219 citations


Journal ArticleDOI
TL;DR: It is shown how the proposed EW distribution offers an excellent fit to simulation and experimental data under all aperture averaging conditions, under weak and moderate turbulence conditions, as well as for point-like apertures.
Abstract: The search for a distribution capable of modeling the probability density function (PDF) of irradiance data under all conditions of atmospheric turbulence in the presence of aperture averaging is still ongoing. Here, a family of PDFs alternative to the widely accepted Log-Normal and Gamma-Gamma distributions is proposed to model the PDF of the received optical power in free-space optical communications, namely, the Weibull and the exponentiated Weibull (EW) distributions. In particular, it is shown how the proposed EW distribution offers an excellent fit to simulation and experimental data under all aperture averaging conditions, under weak and moderate turbulence, as well as for point-like apertures. Another very attractive property of these distributions is the simple closed-form expressions of their respective PDF and cumulative distribution function.
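The closed-form CDF alluded to is simply the Weibull CDF raised to a power. A minimal sketch under a generic parameterization (η scale, β Weibull shape, α exponentiation shape; the paper's own notation may differ):

```python
import numpy as np

def ew_cdf(x, eta, beta, alpha):
    """Exponentiated Weibull CDF: the Weibull CDF raised to the power alpha
    (alpha = 1 recovers the plain Weibull)."""
    x = np.asarray(x, dtype=float)
    return (1.0 - np.exp(-(x / eta) ** beta)) ** alpha

def ew_pdf(x, eta, beta, alpha):
    """Exponentiated Weibull PDF, the derivative of the CDF above."""
    x = np.asarray(x, dtype=float)
    w = np.exp(-(x / eta) ** beta)
    return alpha * (beta / eta) * (x / eta) ** (beta - 1) * w * (1.0 - w) ** (alpha - 1)

print(ew_cdf(1.0, eta=1.2, beta=2.0, alpha=0.8))
```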

152 citations


Journal ArticleDOI
TL;DR: A formalisation of delay propagation by means of an activity graph is presented to outline the required mathematical operations to traverse the graph and to elaborate a suitable class of distribution functions to describe the delays as random variables.

120 citations


Journal ArticleDOI
TL;DR: The decode-and-forward (DF) protocol is analyzed for free-space optical (FSO) links following the Gamma-Gamma distribution, and the average bit error rate of DF relaying is obtained.
Abstract: We analyze the performance of the decode-and-forward (DF) protocol in free space optical (FSO) links following the Gamma-Gamma distribution. The cumulative distribution function (cdf) and probability density function (pdf) of a random variable formed as a mixture of Gamma-Gamma and Gaussian random variables are derived. By using the derived cdf and pdf, the average bit error rate of DF relaying is obtained.

116 citations


Journal ArticleDOI
TL;DR: The performance of two-way amplify-and-forward (AF) relaying networks, considering transmissions over independent but not necessarily identically distributed Rayleigh fading channels in the presence of a finite number of co-channel interferers, is studied.
Abstract: The performance of two-way amplify-and-forward (AF) relaying networks, considering transmissions over independent but not necessarily identically distributed Rayleigh fading channels in the presence of a finite number of co-channel interferers, is studied. Specifically, closed-form expressions for the cumulative distribution function (CDF) of the equivalent signal-to-interference-plus-noise ratio (SINR), the error probability, the outage probability and the system's achievable rate, are presented. Furthermore, an asymptotic expression for the probability density function (PDF) of the equivalent instantaneous SINR is derived, based on which simple and general asymptotic formulas for the error and outage probabilities are derived and analyzed. Numerical results are also provided, sustained by simulations which corroborate the exactness of the theoretical analysis.

111 citations


Journal ArticleDOI
TL;DR: The ergodic capacity of multihop wireless networks in the presence of external interference is studied and a simple and general asymptotic expression for the error probability is presented and discussed.
Abstract: A study of the effect of cochannel interference on the performance of multihop wireless networks with amplify-and-forward (AF) relaying is presented. Considering that transmissions are performed over Rayleigh fading channels, first, the exact end-to-end signal-to-interference-plus-noise ratio (SINR) of the communication system is formulated and upper bounded. Then, the cumulative distribution function (cdf) and probability density function (pdf) of the upper-bounded end-to-end SINR are determined. Based on those results, a closed-form expression for the error probability is obtained. Furthermore, an approximate expression for the pdf of the instantaneous end-to-end SINR is derived, and based on this, a simple and general asymptotic expression for the error probability is presented and discussed. In addition, the ergodic capacity of multihop wireless networks in the presence of external interference is studied. Moreover, analytical comparisons between AF and decode-and-forward (DF) multihop relaying in terms of error probability and ergodic capacity are presented. Finally, optimization of the power allocation at the network's transmit nodes and the positioning of the relays are addressed.

97 citations


Journal ArticleDOI
TL;DR: In this paper, a relation between the continuous ranked probability score (CRPS) score and the quantile score is established, and the evaluation of "raw" ensembles using the CRPS is discussed in this light.
Abstract: The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS score and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
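For a finite ensemble, the CRPS of the piecewise-constant CDF against a scalar observation y admits the well-known kernel identity CRPS = E|X − y| − ½E|X − X′|, which the following sketch implements directly:

```python
import numpy as np

def crps_ensemble(members, y):
    """CRPS of a raw ensemble (piecewise-constant CDF with steps at the
    members) against observation y, via CRPS = E|X - y| - 0.5 * E|X - X'|."""
    x = np.asarray(members, dtype=float)
    term_obs = np.mean(np.abs(x - y))
    term_spread = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term_obs - term_spread

print(crps_ensemble([0.1, 0.4, 0.7, 1.2], y=0.5))
```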

87 citations


Journal ArticleDOI
01 Oct 2012
TL;DR: In this paper, a second-order reliability method (SORM) using non-central or general chi-squared distribution was proposed to improve the accuracy of reliability analysis in existing SORM.
Abstract: This paper proposes a novel second-order reliability method (SORM) using the non-central or general chi-squared distribution to improve the accuracy of reliability analysis over existing SORM. Conventional SORM contains three types of errors: (1) error due to approximating a general nonlinear limit state function by a quadratic function at the most probable point (MPP) in the standard normal U-space, (2) error due to approximating the quadratic function in U-space by a hyperbolic surface, and (3) error due to calculation of the probability of failure after making the previous two approximations. The proposed method contains only the first type of error, which is essential to SORM and thus cannot be removed. However, the proposed method avoids the other two errors by describing the quadratic failure surface with a linear combination of non-central chi-square variables and using that linear combination to estimate the probability of failure. Two approaches for the proposed SORM are suggested in the paper. The first approach directly calculates the probability of failure using numerical integration of the joint probability density function (PDF) over the linear failure surface; the second approach uses the cumulative distribution function (CDF) of the linear failure surface for the calculation of the probability of failure. The proposed method is compared with the first-order reliability method (FORM), conventional SORM, and Monte Carlo simulation (MCS) results in terms of accuracy. Since it contains fewer approximations, the proposed method shows more accurate reliability analysis results than existing SORM without sacrificing efficiency.
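A Monte Carlo reference for such comparisons is easy to set up. The sketch below evaluates the failure probability of an illustrative parabolic limit state in standard normal U-space (our own toy example, not one of the paper's) against the FORM value Φ(−β):

```python
# Monte Carlo failure probability of a toy parabolic limit state in standard
# normal U-space, versus the FORM value Phi(-beta). The limit state, beta and
# curvature are illustrative, not taken from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
beta, kappa = 3.0, 0.1        # reliability index and curvature (illustrative)

u = rng.standard_normal((2_000_000, 2))
g = beta + 0.5 * kappa * u[:, 1] ** 2 - u[:, 0]   # failure when g <= 0

pf_mcs = np.mean(g <= 0.0)
pf_form = norm.cdf(-beta)     # FORM: linearization at the MPP
print(pf_mcs, pf_form)        # curvature makes the exact pf differ from FORM
```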

78 citations


Journal ArticleDOI
TL;DR: It is demonstrated by the analysis and simulation that max-min criterion based path selection works very well in the multi-hop DF cooperative system.
Abstract: In this letter, we find the cumulative distribution function (CDF) and probability density function (PDF) of the generalized max-min exponential random variable (RV) in terms of converging power series. By using the PDF of this RV, the average bit error rate (BER) of the max-min criterion based best path selection scheme in a multi-hop decode-and-forward (DF) cooperative communication system over Rayleigh fading channels is analyzed. It is demonstrated by analysis and simulation that max-min criterion based path selection works very well in the multi-hop DF cooperative system.
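In the i.i.d. special case, the max-min CDF collapses to a simple closed form that makes a convenient sanity check: the minimum of N exponential(λ) hop gains is exponential(Nλ), so the best-path metric over K paths has CDF (1 − e^(−Nλx))^K. A Monte Carlo sketch under this i.i.d. assumption (ours, for illustration; the paper treats the general non-identical case):

```python
import numpy as np

rng = np.random.default_rng(2)
K, N, lam, x = 4, 3, 1.0, 0.3   # paths, hops per path, rate, evaluation point

# Monte Carlo: best-path metric = max over paths of the min over hops
hops = rng.exponential(1.0 / lam, size=(200_000, K, N))
metric = hops.min(axis=2).max(axis=1)
cdf_mc = np.mean(metric <= x)

# Closed form in the i.i.d. case
cdf_exact = (1.0 - np.exp(-N * lam * x)) ** K
print(cdf_mc, cdf_exact)
```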

Journal Article
TL;DR: In this paper, a new family of distributions called the exponentiated Lomax distribution (ELD) is proposed, which generalizes the Lomax distribution by raising its cumulative distribution function (CDF) to a positive real power α.
Abstract: In this paper, we generalize the Lomax distribution by raising its cumulative distribution function (CDF) to the power of a positive real number α. This new family of distributions is called the exponentiated Lomax distribution (ELD). Some properties of this family are discussed. The estimation of the unknown parameters of the ELD is handled using the maximum likelihood, moments and L-moments methods.
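Following this construction, the ELD CDF is just the Lomax CDF raised to the power α. A minimal sketch, using a common Lomax parameterization that may differ from the paper's notation:

```python
import numpy as np

def eld_cdf(x, alpha, shape, scale):
    """Exponentiated Lomax CDF: the Lomax CDF raised to the power alpha.
    The common Lomax form F(x) = 1 - (1 + x/scale)**(-shape) is assumed."""
    x = np.asarray(x, dtype=float)
    return (1.0 - (1.0 + x / scale) ** (-shape)) ** alpha

print(eld_cdf(2.0, alpha=1.5, shape=2.0, scale=1.0))
```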

Journal ArticleDOI
TL;DR: This work shows how fluid-approximation techniques may be used to extract passage-time measures from performance models, focusing on two types of passage measure: passage times involving individual components, as well as passage times which capture the time taken for a population of components to evolve.

Journal ArticleDOI
TL;DR: It is shown that the proposed finite RMT-based algorithms outperform all similar alternatives currently known in the literature, at a substantially lower complexity.
Abstract: We address the Primary User (PU) detection (spectrum sensing) problem, relevant to cognitive radio, from a finite random matrix theoretical (RMT) perspective. Specifically, we employ recently-derived closed-form and exact expressions for the distribution of the standard condition number (SCN) of uncorrelated and semi-correlated random dual central Wishart matrices of finite sizes in the design of hypothesis-testing algorithms to detect the presence of PU signals. In particular, two algorithms are designed, based on the SCN distribution in the absence (H0) and in the presence (H1) of PU signals, respectively. Due to an inherent property of the SCN, the H0 test requires no estimation of the SNR or any other information on the PU signal, while the H1 test requires the SNR only. Further attractive advantages of the new techniques are: a) due to the accuracy of the finite SCN distributions, superior performance is achieved under a finite number of samples, compared to asymptotic RMT-based alternatives; b) since expressions modeling the SCN statistics both in the absence and presence of PU signals are used, the statistics of the spectrum sensing problem in question are completely characterized; and c) as a consequence of a) and b), accurate and simple analytical expressions for the receiver operating characteristic (ROC) - both in terms of the probability of detection as a function of the probability of false alarm (PD versus PF) and in terms of the probability of acquisition as a function of the probability of miss detection (PA versus PM) - are obtained. It is also shown that the proposed finite RMT-based algorithms outperform all similar alternatives currently known in the literature, at a substantially lower complexity. In the process, several new results on the distributions of eigenvalues and SCNs of random Wishart matrices are offered, including a closed form of the Marchenko-Pastur cumulative distribution function (CDF) and extensions of the latter, as well as variations of the asymptotic distributions of extreme eigenvalues (Tracy-Widom) and of their ratio (Tracy-Widom-Curtiss), which are simpler than those obtained with the "spiked population model".

Journal ArticleDOI
TL;DR: In this article, the authors deal with the mathematical problem of estimation of the nonextensive parameters of the earthquake cumulative magnitude distribution by means of the maximum likelihood method, which is a particular case of the Gutenberg-Richter law.
Abstract: Recently, interest in the application of nonextensive models to the earthquake cumulative magnitude distribution (ECMD) has been increasing. Nonextensivity in seismicity leads to a new form of the cumulative distribution of earthquake magnitudes, of which the Gutenberg–Richter law can be considered as a particular case. The present study deals with the mathematical problem of estimation of the nonextensive parameters of the ECMD by means of the maximum likelihood method.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: This paper studies the maximum achievable rate region of multiple access channels (MAC) for a given blocklength n and a desired error probability ϵ and provides general converse bounds for both average error probability and maximum error probability criteria.
Abstract: This paper studies the maximum achievable rate region of multiple access channels (MAC) for a given blocklength n and a desired error probability ϵ. The inner region for the discrete memoryless MAC is approximated by the single-letter expression I − (1/√n) Qinv(V, ϵ), where I is associated with the capacity pentagon bounds by Ahlswede and Liao, V is the MAC dispersion matrix, and Qinv is the inverse of the complementary multivariate Gaussian cumulative distribution region. For outer regions, we provide general converse bounds for both average error probability and maximum error probability criteria, and a single-letter approximation for the discrete memoryless MAC.

Book
31 Mar 2012
TL;DR: A textbook introduction to engineering decision making under uncertainty, covering basic probability theory, descriptive statistics, uncertainty modelling, estimation and model building, methods of structural reliability, and Bayesian decision analysis.
Abstract: ENGINEERING DECISIONS UNDER UNCERTAINTY.- Lecture 1.- 1.1 Introduction.- 1.2 Societal Decision Making and Risk.- 1.2.1 Example 1.1 - Feasibility of Hydraulic Power Plant.- 1.3 Definition of Risk.- 1.4 Self Assessment Questions / Exercises.- 2 BASIC PROBABILITY THEORY.- Lecture 2.- 2.1 Introduction.- 2.2 Definition of Probability.- 2.2.1 Frequentistic Definition.- 2.2.3 Bayesian Definition.- 2.2.4 Practical Implications of the Different Interpretations of Probability.- 2.3 Sample Space and Events.- 2.4 The three Axioms of Probability Theory.- 2.5 Conditional Probability and Bayes' Rule.- 2.5.1 Example 2.1 - Using Bayes' Rule for Concrete Assessment.- 2.5.2 Example 2.2 - Using Bayes' Rule for Bridge Upgrading.- 2.6 Self Assessment Questions / Exercises.- 3 DESCRIPTIVE STATISTICS.- Lecture 3.- 3.1 Introduction.- 3.2 Numerical Summaries.- 3.2.1 Central Measures.- 3.2.2 Example 3.1 - Concrete Compressive Strength Data.- 3.2.3 Example 3.2 - Traffic Flow Data.- 3.2.4 Dispersion Measures.- 3.2.5 Other Measures.- 3.2.6 Sample Moments and Sample Central Moments.- 3.2.7 Measures of Correlation.- 3.3 Graphical Representations.- 3.3.1 One-Dimensional Scatter Diagrams.- 3.3.2 Histograms.- 3.3.3 Quantile Plots.- 3.3.4 Tukey Box Plots.- 3.3.5 Q-Q Plots and Tukey Mean-Difference Plot.- 3.4 Self Assessment Questions / Exercises.- 4 UNCERTAINTY MODELLING.- Lecture 4.- 4.1 Introduction.- 4.2 Uncertainties in Engineering Problems.- 4.3 Random Variables.- 4.3.1 Cumulative Distribution and Probability Density Functions.- 4.3.2 Moments of Random Variables and the Expectation Operator.- 4.3.3 Example 4.1 - Uniform Distribution.- Lecture 5.- 4.3.4 Properties of the Expectation Operator.- 4.3.5 Random Vectors and Joint Moments.- 4.3.6 Example 4.2 - Linear Combinations and Random Variables.- 4.3.7 Conditional Distributions and Conditional Moments.- 4.3.8 The Probability Distribution for the Sum of two Random Variables.- 4.3.9 Example 4.3 - Density Function for the Sum of two Random Variables - Special Case Normal Distribution.- 4.3.10 The Probability Distribution for Functions of Random Variables.- 4.3.11 Example 4.4 - Probability Distribution for a Function of Random Variables.- Lecture 6.- 4.3.12 Probability Density and Distribution Functions.- 4.3.13 The Central Limit Theorem and Derived Distributions.- 4.3.14 Example 4.5 - Central Limit Theorem.- 4.3.15 The Normal Distribution.- 4.3.16 The Lognormal Distribution.- 4.4 Stochastic Processes and Extremes.- 4.4.1 Random Sequences - Bernoulli Trials.- 4.4.2 Example 4.6 - Quality Control of Concrete.- Lecture 7.- 4.4.3 The Poisson Counting Process.- 4.4.4 Continuous Random Processes.- 4.4.5 Stationarity and Ergodicity.- 4.4.6 Statistical Assessment of Extreme Values.- 4.4.7 Extreme Value Distributions.- 4.4.8 Type I Extreme Maximum Value Distribution - Gumbel max.- 4.4.9 Type I Extreme Minimum Value Distribution - Gumbel min.- 4.4.10 Type II Extreme Maximum Value Distribution - Fréchet max.- 4.4.11 Type III Extreme Minimum Value Distribution - Weibull min.- 4.4.12 Return Period for Extreme Events.- 4.4.13 Example 4.7 - A Flood with a 100-Year Return Period.- 4.5 Self Assessment Questions / Exercises.- 5 ESTIMATION AND MODEL BUILDING.- Lecture 8.- 5.1 Introduction.- 5.2 Selection of Probability Distributions.- 5.2.1 Model Selection by Use of Probability Paper.- 5.3 Estimation of Distribution Parameters.- 5.3.1 The Method of Moments.- 5.3.2 The Method of Maximum Likelihood.- 5.3.3 Example 5.1 - Parameter Estimation.- Lecture 9.- 5.4 Bayesian Estimation Methods.- 5.4.1 Example 5.2 - Yield Stress of a Steel Bar.- 5.5 Bayesian Regression Analysis.- 5.5.1 Linear Regression: Prior Model.- 5.5.2 Example 5.3 - Tensile Strength of Timber: Prior Model.- 5.5.3 Updating Regression Coefficients: Posterior Model.- 5.5.4 Example 5.4 - Updating Regression Coefficients (determined in Example 5.3).- Lecture 10.- 5.6 Probability Distributions in Statistics.- 5.6.1 The Chi-Square (χ²)-Distribution.- 5.6.2 The Chi (χ)-Distribution.- 5.7 Estimators for Sample Descriptors - Sample Statistics.- 5.7.1 Statistical Characteristics of the Sample Average.- 5.7.2 Statistical Characteristics of the Sample Variance.- 5.7.3 Confidence Intervals.- 5.8 Testing for Statistical Significance.- 5.8.1 The Hypothesis Testing Procedure.- 5.8.2 Testing of the Mean with Known Variance.- 5.8.3 Some Remarks on Testing.- Lecture 11.- 5.9 Model Evaluation by Statistical Testing.- 5.9.1 The Chi-Square (χ²)-Goodness of Fit Test.- 5.9.2 The Kolmogorov-Smirnov Goodness of Fit Test.- 5.9.3 Model Comparison.- 5.10 Self Assessment Questions / Exercises.- 6 METHODS OF STRUCTURAL RELIABILITY.- Lecture 12.- 6.1 Introduction.- 6.2 Failure Events and Basic Random Variables.- 6.3 Linear Limit State Functions and Normal Distributed Variables.- 6.3.1 Example 6.1 - Reliability of a Steel Rod - Linear Safety Margin.- 6.4 The Error Propagation Law.- 6.4.1 Example 6.2 - Error Propagation Law.- 6.5 Non-linear Limit State Functions.- 6.5.1 Example 6.3 - FORM - Non-linear Limit State Function.- 6.6 Simulation Methods.- 6.6.1 Example 6.4 - Monte Carlo Simulation.- 6.7 Self Assessment Questions / Exercises.- 7 BAYESIAN DECISION ANALYSIS.- Lecture 13.- 7.1 Introduction.- 7.2 The Decision / Event Tree.- 7.3 Decisions Based on Expected Values.- 7.4 Decision Making Subject to Uncertainty.- 7.5 Decision Analysis with Given Information - Prior Analysis.- 7.6 Decision Analysis with Additional Information - Posterior Analysis.- 7.7 Decision Analysis with 'Unknown' Information - Pre-posterior Analysis.- 7.8 The Risk Treatment Decision Problem.- 7.9 Self Assessment Questions / Exercises.- A ANSWERS TO SELF ASSESSMENT QUESTIONS.- A.1 Chapter 1.- A.2 Chapter 2.- A.3 Chapter 3.- A.4 Chapter 4.- A.5 Chapter 5.- A.6 Chapter 6.- A.7 Chapter 7.- B EXAMPLES OF CALCULATIONS.- B.1 Chapter 5.- B.1.1 Equation 5.67.- B.1.2 Equation 5.71.- B.1.3 Examples on Chi-square Significance Test.- B.2 Chapter 6.- B.2.1 Example 6.2.- B.2.2 Example 6.3.- C TABLES.- References.- Index.

Journal ArticleDOI
TL;DR: In this paper, a method to estimate an extreme quantile that requires no distributional assumptions is presented, which is based on transformed kernel estimation of the cumulative distribution function (cdf).
Abstract: A method to estimate an extreme quantile that requires no distributional assumptions is presented. The approach is based on transformed kernel estimation of the cumulative distribution function (cdf). The proposed method consists of a double-transformation kernel estimation. We derive optimal bandwidth selection methods that have a direct expression for the smoothing parameter. The bandwidth can adapt to the given quantile level. The procedure is useful for large data sets and improves quantile estimation compared to other methods for heavy-tailed distributions. Implementation is straightforward and R programs are available.
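A rough sketch of the transformed-kernel idea follows: one fitted parametric transform followed by a probit transform (so the kernel smoothing happens on an unbounded scale), an ad hoc bandwidth rather than the paper's optimal one, and numerical inversion. All choices here are ours, for illustration only:

```python
# Double-transformation kernel sketch: fitted lognormal cdf, then a probit
# transform so kernel smoothing happens on an unbounded scale; the bandwidth
# is ad hoc (the paper derives an optimal, directly computable one).
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(3)
x = rng.pareto(2.5, 3000) + 1.0                        # heavy-tailed sample

s, loc, scale = lognorm.fit(x, floc=0.0)               # first transform: fitted cdf
z = norm.ppf(lognorm.cdf(x, s, loc=loc, scale=scale))  # second: probit

h = 0.2                                                # ad hoc bandwidth
grid = np.linspace(z.min() - 1.0, z.max() + 1.0, 1500)
cdf_hat = norm.cdf((grid[:, None] - z[None, :]) / h).mean(axis=1)

# Invert the smoothed cdf at the 99.5% level and map back to the data scale
z_q = grid[np.searchsorted(cdf_hat, 0.995)]
x_q = lognorm.ppf(norm.cdf(z_q), s, loc=loc, scale=scale)
print(x_q, np.quantile(x, 0.995))   # compare with the raw empirical quantile
```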

Journal ArticleDOI
TL;DR: In this paper, the performance of probability estimation methods for reliability analysis is investigated, and a comparative study of the four probability estimation algorithms is carried out to derive the general guidelines for selecting the most appropriate probability estimation method.
Abstract: In this paper we investigate the performance of probability estimation methods for reliability analysis. The probability estimation methods typically construct the probability density function (PDF) of a system response using estimated statistical moments, and then perform reliability analysis based on the approximate PDF. In recent years, a number of probability estimation methods have been proposed, such as the Pearson system, saddlepoint approximation, Maximum Entropy Principle (MEP), and Johnson system. However, no general guideline for selecting the most appropriate probability estimation method has yet been proposed. In this study, we carry out a comparative study of the four probability estimation methods so as to derive such general guidelines. Several comparison metrics are proposed to quantify the accuracy of the PDF approximation, the cumulative distribution function (CDF) approximation, and tail probability estimation (or reliability analysis). This comparative study gives insightful guidance for selecting the most appropriate probability estimation method for reliability analysis. The four probability estimation methods are extensively tested with one mathematical and two engineering examples, each of which considers eight different combinations of the system response characteristics in terms of response boundedness, skewness, and kurtosis.

Journal ArticleDOI
TL;DR: This paper analyzes the average throughput and outage probability of the multirelay delay-limited (DL) HARQ system with an opportunistic relaying scheme in decode-and-forward (DF) mode, in which the best relay is selected to transmit the source's regenerated signal.
Abstract: We consider a half-duplex wireless relay network with hybrid automatic repeat request (HARQ) and Rayleigh fading channels. In this paper, we analyze the average throughput and outage probability of the multirelay delay-limited (DL) HARQ system with an opportunistic relaying scheme in decode-and-forward (DF) mode, in which the best relay is selected to transmit the source's regenerated signal. A simple and distributed relay selection strategy is considered for multirelay HARQ channels. Then, we utilize the nonorthogonal cooperative transmission between the source and the selected relay for retransmission of source data toward the destination, if needed, using space-time codes. We first derive the cumulative distribution function (cdf) and probability density function (pdf) of the selected relay HARQ channels. Then, the cdf and pdf are used to determine the exact outage probability in the lth round of HARQ. The outage probability is required to compute the throughput-delay performance of this half-duplex opportunistic relaying protocol. The packet delay constraint is represented by L, the maximum number of HARQ rounds; an outage is declared if the packet is unsuccessful after L HARQ rounds. Furthermore, simple closed-form upper bounds on the outage probability are derived. Based on the derived upper-bound expressions, it is shown that the proposed schemes achieve the full spatial diversity order of N + 1, where N is the number of potential relays. Our analytical results are confirmed by simulation results. In addition, simulations show that our proposed scheme can achieve higher average throughput compared with direct transmission and conventional two-phase relay networks.

Journal ArticleDOI
TL;DR: It will be seen that the additional degrees of freedom associated with Bartlett's method empower it to offer excellent performance, and its probability of false alarm is only superior at relatively higher sensing thresholds.
Abstract: Energy detection can be used for spectrum sensing in cognitive radios when no prior knowledge about the primary signals is available. The performance of this technique, however, is strongly influenced by the available decision estimate. This paper presents an accurate performance analysis of energy detection when using Bartlett's estimate as the test statistic. Both independent and identically distributed Rayleigh and Rician fading channels are investigated for unknown signals with complex envelopes. The novel contribution here is threefold. First, the quadratic form representation of Bartlett's estimate is formulated. Then, starting from the characteristic function, the cumulative distribution function is derived for each type of channel, and accurate expressions are developed for the probabilities of false alarm and missed detection. Finally, a performance comparison with the raw periodogram is presented. The accuracy of the proposed analysis is confirmed using Monte Carlo trials. The results provide valuable insight into the performance of energy detection based on Bartlett's estimate. It will be seen that the additional degrees of freedom associated with Bartlett's method empower it to offer excellent performance. When compared with the raw periodogram, Bartlett's method consistently leads to a lower probability of miss, but its probability of false alarm is only superior at relatively higher sensing thresholds.
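Bartlett's estimate is the average of periodograms over non-overlapping segments. A minimal sketch of using it as an energy-detection test statistic, with an illustrative signal model and an arbitrary threshold (the paper derives the threshold from the exact distributions, which this sketch does not reproduce):

```python
# Energy detection with Bartlett's estimate as the test statistic: average the
# periodograms of k non-overlapping segments, then average over frequency.
# Signal model and threshold are illustrative, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(4)

def bartlett_statistic(samples, num_segments):
    """Frequency-averaged Bartlett periodogram estimate (a scalar energy)."""
    segments = np.array_split(samples, num_segments)
    psd = np.mean([np.abs(np.fft.fft(seg)) ** 2 / len(seg) for seg in segments], axis=0)
    return psd.mean()

n, k, snr = 1024, 8, 0.5
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
tone = np.sqrt(snr) * np.exp(2j * np.pi * 0.1 * np.arange(n))  # unknown PU signal

threshold = 1.3   # would be set from the derived false-alarm distribution
print("H0:", bartlett_statistic(noise, k))         # fluctuates around 1
print("H1:", bartlett_statistic(noise + tone, k))  # fluctuates around 1 + snr
```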

Journal ArticleDOI
TL;DR: In this paper, a time-varying probability density function, or the corresponding cumulative distribution function, is estimated nonparametrically by using a kernel and weighting the observations using schemes derived from time series modelling.

Proceedings ArticleDOI
17 Jun 2012
TL;DR: The probability density function (PDF) and cumulative distribution function of the sum of L independent but not necessarily identically distributed gamma variates, applicable to maximal ratio combining receiver outputs and hence to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels, are presented in closed form.
Abstract: The probability density function (PDF) and cumulative distribution function of the sum of L independent but not necessarily identically distributed gamma variates, applicable to maximal ratio combining receiver outputs or, in other words, to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels, are presented in closed form in terms of the Meijer G-function and Fox H-function for integer-valued and non-integer-valued fading parameters, respectively. Further analysis, particularly on the bit error rate via a PDF-based approach, is also presented in closed form in terms of the Meijer G-function and Fox H-function for integer-order fading parameters, and the extended Fox H-function (Ĥ) for non-integer-order fading parameters. The proposed results complement previous results that are either given in closed form or expressed in terms of infinite sums or higher-order derivatives of the fading parameter m.

Journal ArticleDOI
TL;DR: It is shown that the exact decision threshold based B-GLRT detector gives superior performance over the asymptotic decision threshold schemes proposed in the literature, which leads to efficient spectrum usage in cognitive radio.
Abstract: This correspondence investigates the statistical properties of the ratio T = λ1 / Σ_{i=1}^{m} λi, where λ1 ≥ λ2 ≥ ··· ≥ λm are the m eigenvalues of an m × m complex central Wishart matrix W with n degrees of freedom. We derive new exact analytical expressions for the probability density function (PDF) and cumulative distribution function (CDF) of T for complex central Wishart matrices with arbitrary dimensions. We also formulate simplified statistics of T for the special case of dual uncorrelated and dual correlated complex central Wishart matrices (m = 2). The investigated ratio T is the most important ratio in blind spectrum sensing, since it represents a sufficient statistic for the generalized likelihood ratio test (GLRT). Thus, the derived analytical results are used to find the exact decision threshold for the desired probability of false alarm for the Blind-GLRT (B-GLRT) detector. It is shown that the B-GLRT detector based on the exact decision threshold gives superior performance over the asymptotic decision threshold schemes proposed in the literature, which leads to efficient spectrum usage in cognitive radio.
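The statistic T is easy to simulate, which gives a useful cross-check of the exact threshold. The sketch below estimates an empirical H0 threshold for a target false-alarm probability (the paper obtains this threshold exactly from the derived CDF):

```python
# Monte Carlo sketch of T = lambda_1 / sum(lambda_i) for an m x m complex
# central Wishart matrix with n degrees of freedom, and an empirical H0
# threshold for a target false-alarm probability.
import numpy as np

rng = np.random.default_rng(5)
m, n, trials, pfa = 4, 32, 20_000, 0.05

def glrt_statistic():
    h = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    w = h @ h.conj().T                 # complex central Wishart, n degrees of freedom
    eig = np.linalg.eigvalsh(w)        # real, ascending eigenvalues
    return eig[-1] / eig.sum()

t = np.array([glrt_statistic() for _ in range(trials)])
print("empirical threshold:", np.quantile(t, 1.0 - pfa))
```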

Journal ArticleDOI
TL;DR: In this paper, a portfolio of n dependent risks X1, …, Xn is considered and the stochastic behavior of the aggregate claim amount S = X1 + ⋯ + Xn is studied.
Abstract: In this paper, we consider a portfolio of n dependent risks X1, …, Xn and we study the stochastic behavior of the aggregate claim amount S = X1 + ⋯ + Xn. Our objective is to determine the amount of economic capital needed for the whole portfolio and to compute the amount of capital to be allocated to each risk X1, …, Xn. To do so, we use a top-down approach. For (X1, …, Xn), we consider risk models based on multivariate compound distributions defined with a multivariate counting distribution. We use the TVaR to evaluate the total capital requirement of the portfolio based on the distribution of S, and we use the TVaR-based capital allocation method to quantify the contribution of each risk. To simplify the presentation, the claim amounts are assumed to be continuously distributed. For multivariate compound distributions with continuous claim amounts, we provide general formulas for the cumulative distribution function of S, for the TVaR of S, and for the contribution of each risk. We obtain closed-form expressions for those quantities for multivariate compound distributions with gamma and mixed Erlang claim amounts. Finally, we treat in detail the multivariate compound Poisson distribution case. Numerical examples are provided in order to examine the impact of the dependence relation on the TVaR of S, the contribution of each risk of the portfolio, and the benefit of the aggregation of several risks.
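The TVaR and the TVaR-based allocation rule are simple to approximate by simulation: with the p-level VaR as the threshold, TVaR_p(S) = E[S | S > VaR_p(S)], and the contribution of risk i is E[Xi | S > VaR_p(S)], so the contributions sum to the total. A sketch with an illustrative common-shock gamma portfolio (not the paper's compound model):

```python
import numpy as np

rng = np.random.default_rng(6)
p, sims = 0.99, 500_000

# Illustrative dependent portfolio: a common gamma shock plus idiosyncratic parts
shock = rng.gamma(2.0, 1.0, sims)
x = np.column_stack([rng.gamma(1.5, 1.0, sims) + 0.5 * shock for _ in range(3)])
s = x.sum(axis=1)                     # aggregate claim amount S

tail = s > np.quantile(s, p)          # events beyond VaR_p(S)
tvar = s[tail].mean()                 # TVaR_p(S) = E[S | S > VaR_p]
contributions = x[tail].mean(axis=0)  # E[X_i | S > VaR_p], the allocation
print(tvar, contributions, contributions.sum())   # contributions sum to TVaR
```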

Journal ArticleDOI
TL;DR: This paper considers the analytical performance of primary users subject to interference due to secondary users (SU) in an underlay cognitive radio system over Rayleigh fading and indicates that the PU can achieve the full diversity gain given a non-zero protective region around the PU.
Abstract: This paper considers the analytical performance of primary users (PUs) subject to interference from secondary users (SUs) in an underlay cognitive radio system over Rayleigh fading. In particular, we focus on a general spatial configuration in which the interfered PU is not necessarily located at the center of the cell, has a protective region that is free of SUs, and the SUs are distributed over a finite area, in contrast to the commonly used infinite-area assumption. We first characterize the statistical properties of the aggregate interference at the PU due to the SUs by deriving new exact closed-form expressions for the moment generating function, the cumulants, the first, second and third moments, and first-order expansions of the cumulative distribution functions corresponding to propagation scenarios with path-loss exponents of two and four. We then investigate the PU performance by presenting new analytical expressions for the outage probability and amount of fading, as well as the diversity order and coding gain. Our results indicate that the PU can achieve the full diversity gain given a non-zero protective region around the PU.

Journal ArticleDOI
TL;DR: In this article, an alternative resampling technique based on a fast weighted bootstrap is proposed, which can be used as a large-sample alternative to the parametric bootstrap.
Abstract: The process comparing the empirical cumulative distribution function of the sample with a parametric estimate of the cumulative distribution function is known as the empirical process with estimated parameters and has been extensively employed in the literature for goodness-of-fit testing. The simplest way to carry out such goodness-of-fit tests, especially in a multivariate setting, is to use a parametric bootstrap. Although very easy to implement, the parametric bootstrap can become very computationally expensive as the sample size, the number of parameters, or the dimension of the data increase. An alternative resampling technique based on a fast weighted bootstrap is proposed in this paper, and is studied both theoretically and empirically. The outcome of this work is a generic and computationally efficient multiplier goodness-of-fit procedure that can be used as a large-sample alternative to the parametric bootstrap. In order to approximately determine how large the sample size needs to be for the parametric and weighted bootstraps to have roughly equivalent powers, extensive Monte Carlo experiments are carried out in dimension one, two and three, and for models containing up to nine parameters. The computational gains resulting from the use of the proposed multiplier goodness-of-fit procedure are illustrated on trivariate financial data. A by-product of this work is a fast large-sample goodness-of-fit procedure for the bivariate and trivariate t distribution whose degrees of freedom are fixed.
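For context, the parametric-bootstrap baseline that the multiplier method is designed to replace looks roughly as follows; the sketch tests a normal null with estimated parameters using the Cramér-von Mises statistic (an illustrative choice of statistic and null, not the paper's setup):

```python
# Parametric-bootstrap goodness-of-fit baseline: Cramer-von Mises statistic
# for a normal null with estimated parameters; the null distribution of the
# statistic is rebuilt by simulating from the fitted model and refitting.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def cvm_statistic(data):
    """Cramer-von Mises distance to the normal cdf with estimated parameters."""
    x = np.sort(data)
    n = len(x)
    u = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)

data = rng.normal(1.0, 2.0, 200)
t_obs = cvm_statistic(data)

mu, sd = data.mean(), data.std(ddof=1)
t_boot = np.array([cvm_statistic(rng.normal(mu, sd, len(data))) for _ in range(2000)])
print("p-value:", np.mean(t_boot >= t_obs))
```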

Journal ArticleDOI
TL;DR: By considering approximations for the statistics of the postprocessing SNR, it is shown that the investigated system achieves the full diversity order in the ideal case, whereas it maintains receive diversity under practical impairments.
Abstract: This paper focuses on multi-antenna systems that employ both maximal-ratio transmission and receive antenna selection (MRT&RAS) in independent and identically distributed Nakagami-m flat-fading channels with channel estimation errors (CEE) [or feedback quantization errors (FQE)] and feedback delay (FD). Useful statistics of the postprocessing signal-to-noise ratio (SNR), such as the probability density function, cumulative distribution function, moment-generating function, and nth-order moments, are presented. To examine the capacity and error performance of the MRT&RAS scheme, the ergodic capacity, outage probability, and bit error rates/symbol error rates (BERs/SERs) for binary and M-ary modulations are analyzed. Exact analytical expressions for all the performance metrics are separately derived for practical situations that involve CEE (or FQE) and FD, as well as for ideal estimation and feedback conditions. By considering approximations for the statistics of the postprocessing SNR, it is shown that, at high SNRs, the investigated system achieves the full diversity order in the ideal case, whereas it maintains receive diversity under practical impairments. The performance superiority of the MRT&RAS scheme over other diversity schemes is also shown by numerical comparisons. In addition, the analytical performance results for the outage probability and BERs/SERs are validated by Monte Carlo simulations.

Journal ArticleDOI
TL;DR: Considering dual-hop channel state information (CSI)-assisted amplify-and-forward (AF) relaying over Nakagami-m fading channels, the cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) is derived.
Abstract: In this correspondence, considering dual-hop channel state information (CSI)-assisted amplify-and-forward (AF) relaying over Nakagami-m fading channels, the cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) is derived. In particular, when the fading shape factors m1 and m2 at consecutive hops take non-integer values, the bivariate H-function and G-function are exploited to obtain an exact analytical expression for the CDF. The obtained CDF is then applied to evaluate the outage performance of the system under study. The analytical results of outage probability coincide exactly with Monte-Carlo simulation results and outperform the previously reported upper bounds in the low and medium SNR regions.
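The outage performance of such a link is also easy to cross-check by simulation: for CSI-assisted AF relaying, the end-to-end SNR takes the standard form γ = γ1γ2/(γ1 + γ2 + 1), and Nakagami-m fading makes each per-hop SNR gamma-distributed, including for non-integer m. A Monte Carlo sketch with illustrative parameters (not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(8)
m1, m2 = 1.7, 2.3               # non-integer Nakagami-m shape factors per hop
avg1, avg2 = 10.0, 8.0          # average per-hop SNRs (linear scale)
gamma_th, sims = 2.0, 1_000_000 # outage threshold and number of trials

g1 = rng.gamma(m1, avg1 / m1, sims)   # Nakagami-m fading -> gamma-distributed SNR
g2 = rng.gamma(m2, avg2 / m2, sims)
g_end = g1 * g2 / (g1 + g2 + 1.0)     # CSI-assisted AF end-to-end SNR

print("outage probability:", np.mean(g_end < gamma_th))
```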

Journal ArticleDOI
TL;DR: This article proposed a flexible method to approximate the subjective cumulative distribution function of an economic agent about the future realization of a continuous random variable, which can closely approximate a wide variety of distributions while maintaining weak assumptions on the shape of distribution functions.
Abstract: We propose a flexible method to approximate the subjective cumulative distribution function of an economic agent about the future realization of a continuous random variable. The method can closely approximate a wide variety of distributions while maintaining weak assumptions on the shape of distribution functions. We show how moments and quantiles of general functions of the random variable can be computed analytically and/or numerically. We illustrate the method by revisiting the determinants of income expectations in the United States. A Monte Carlo analysis suggests that a quantile-based flexible approach can be used to successfully deal with censoring and possible rounding levels present in the data. Finally, our analysis suggests that the performance of our flexible approach matches that of a correctly specified parametric approach and is clearly better than that of a misspecified parametric approach.