
Showing papers on "Cumulative distribution function published in 2009"


Journal ArticleDOI
TL;DR: In this paper, a review of the use of the probability density function (PDF) of wind speed is carried out for a wide collection of models; the methods that have been used to estimate the parameters on which these models depend are reviewed, and the degree of complexity of the estimation is analyzed as a function of the model selected.
Abstract: The probability density function (PDF) of wind speed is important in numerous wind energy applications. A large number of studies have been published in scientific literature related to renewable energies that propose the use of a variety of PDFs to describe wind speed frequency distributions. In this paper a review of these PDFs is carried out. The flexibility and usefulness of the PDFs in the description of different wind regimes (high frequencies of null winds, unimodal, bimodal, bitangential regimes, etc.) is analysed for a wide collection of models. Likewise, the methods that have been used to estimate the parameters on which these models depend are reviewed and the degree of complexity of the estimation is analysed as a function of the model selected: these are the method of moments (MM), the maximum likelihood method (MLM) and the least squares method (LSM). In addition, a review is conducted of the statistical tests employed to see whether a sample of wind data comes from a population with a particular probability distribution. With the purpose of cataloguing the various PDFs, a comparison is made between them and the two-parameter Weibull distribution (W.pdf), which has been the most widely used and accepted distribution in the specialised literature on wind energy and other renewable energy sources. This comparison is based on: (a) an analysis of the degree of fit of the continuous cumulative distribution functions (CDFs) for wind speed to the cumulative relative frequency histograms of hourly mean wind speeds recorded at weather stations located in the Canarian Archipelago; (b) an analysis of the degree of fit of the CDFs for wind power density to the cumulative relative frequency histograms of the cube of hourly mean wind speeds recorded at the aforementioned weather stations. The suitability of the distributions is judged from the coefficient of determination R².
Amongst the various conclusions obtained, it can be stated that the W.pdf presents a series of advantages with respect to the other PDFs analysed. However, the W.pdf cannot represent all the wind regimes encountered in nature such as, for example, those with high percentages of null wind speeds, bimodal distributions, etc. Therefore, its generalised use is not justified and it will be necessary to select the appropriate PDF for each wind regime in order to minimise errors in the estimation of the energy produced by a WECS (wind energy conversion system). In this sense, the extensive collection of PDFs proposed in this paper comprises a valuable catalogue.
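
As a minimal illustration of the fitting-and-comparison workflow the review describes, the sketch below fits a two-parameter Weibull distribution to synthetic wind-speed data by maximum likelihood and scores the fit with the coefficient of determination R² between the empirical and fitted CDFs. The data and parameter values are illustrative, not taken from the Canarian stations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic hourly mean wind speeds (m/s): true Weibull with shape k=2, scale c=8
speeds = stats.weibull_min.rvs(c=2.0, scale=8.0, size=5000, random_state=rng)

# Maximum likelihood fit of the two-parameter Weibull (location fixed at 0)
k_hat, loc, c_hat = stats.weibull_min.fit(speeds, floc=0)

# Goodness of fit via R^2 between the empirical and fitted CDFs
x = np.sort(speeds)
ecdf = np.arange(1, x.size + 1) / x.size
fcdf = stats.weibull_min.cdf(x, k_hat, scale=c_hat)
ss_res = np.sum((ecdf - fcdf) ** 2)
ss_tot = np.sum((ecdf - ecdf.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The same R² comparison can be repeated against any other candidate PDF to build the kind of catalogue the paper proposes.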

690 citations


ReportDOI
TL;DR: In this article, the authors used control variables to identify and estimate models with nonseparable, multidimensional disturbances, with instruments and disturbances independent and a reduced form that is strictly monotonic in a scalar disturbance.
Abstract: This paper uses control variables to identify and estimate models with nonseparable, multidimensional disturbances. Triangular simultaneous equations models are considered, with instruments and disturbances that are independent and a reduced form that is strictly monotonic in a scalar disturbance. Here it is shown that the conditional cumulative distribution function of the endogenous variable given the instruments is a control variable. Also, for any control variable, identification results are given for quantile, average, and policy effects. Bounds are given when a common support assumption is not satisfied. Estimators of identified objects and bounds are provided, and a demand analysis empirical example is given.

460 citations


Journal ArticleDOI
TL;DR: The basic theory concerning the use of special functions, copulae, for dependence modeling is presented, with focus on a basic function, the Normal copula; the case study shows the application of the technique to the study of the large-scale integration of wind power in the Netherlands.
Abstract: The increasing penetration of renewable generation in power systems necessitates the modeling of this stochastic system infeed in operation and planning studies. The system analysis leads to multivariate uncertainty analysis problems, involving non-Normal correlated random variables. In this context, the modeling of stochastic dependence is paramount for obtaining accurate results; it corresponds to the concurrent behavior of the random variables, having a major impact on the aggregate uncertainty (in problems where the random variables correspond to spatially spread stochastic infeeds) or their evolution in time (in problems where the random variables correspond to infeeds over specific time-periods). In order to investigate, measure and model stochastic dependence, one should transform all different random variables to a common domain, the rank/uniform domain, by applying the cumulative distribution function transformation. In this domain, special functions, copulae, can be used for modeling dependence. In this contribution the basic theory concerning the use of these functions for dependence modeling is presented and focus is given to a basic function, the Normal copula. The case study shows the application of the technique for the study of the large-scale integration of wind power in the Netherlands.
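
The CDF transform to the rank/uniform domain and the Normal copula can be sketched as follows; the marginals (a Weibull and a gamma infeed) and the correlation value are illustrative assumptions, not taken from the case study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20000
rho = 0.8  # correlation parameter of the Normal copula (assumed)

# 1. Sample correlated standard normals
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# 2. Map to the rank/uniform domain with the normal CDF
u = stats.norm.cdf(z)

# 3. Apply inverse marginal CDFs: two non-Normal infeed marginals
x1 = stats.weibull_min.ppf(u[:, 0], 2.0, scale=8.0)   # site 1 (illustrative)
x2 = stats.gamma.ppf(u[:, 1], a=3.0, scale=2.5)       # site 2 (illustrative)

# Dependence survives the marginal transforms (Spearman rank correlation)
r_s = stats.spearmanr(x1, x2)[0]
```

Because rank correlation is invariant under the monotone marginal transforms, r_s stays close to the value implied by the copula parameter even though neither marginal is Normal.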

351 citations


Journal ArticleDOI
TL;DR: Only in certain cases, for instance, in estimating a value of the cumulative distribution function and when the assumed model is very different from the true model, can the use of dichotomized outcomes be considered a reasonable approach.
Abstract: Dichotomization is the transformation of a continuous outcome (response) to a binary outcome. This approach, while somewhat common, is harmful from the viewpoint of statistical estimation and hypothesis testing. We show that this leads to loss of information, which can be large. For normally distributed data, this loss in terms of Fisher's information is at least 1-2/pi (or 36%). In other words, 100 continuous observations are statistically equivalent to 158 dichotomized observations. The amount of information lost depends greatly on the prior choice of cut points, with the optimal cut point depending upon the unknown parameters. The loss of information leads to loss of power or conversely a sample size increase to maintain power. Only in certain cases, for instance, in estimating a value of the cumulative distribution function and when the assumed model is very different from the true model, can the use of dichotomized outcomes be considered a reasonable approach.
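
A quick simulation illustrates the stated efficiency loss: dichotomizing normal data at its mean (with known unit variance) inflates the variance of the mean estimate by a factor approaching pi/2, i.e. about 57%, consistent with 100 continuous observations being worth roughly 158 dichotomized ones. The setup below is an illustrative sketch, not the authors' derivation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 100, 4000

means_cont, means_dich = [], []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)                    # true mean 0, unit variance
    means_cont.append(x.mean())                    # estimate from continuous data
    p_hat = np.clip((x > 0).mean(), 1/n, 1 - 1/n)  # keep the probit finite
    means_dich.append(stats.norm.ppf(p_hat))       # mean recovered from P(X > 0)

# Variance inflation from dichotomizing at the (optimal) cut point, the mean
var_ratio = np.var(means_dich) / np.var(means_cont)  # approaches pi/2 (~1.57)
```

The inverse of this ratio, 2/pi, is the retained Fisher information quoted in the abstract.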

232 citations


Proceedings ArticleDOI
30 Nov 2009
TL;DR: This paper analyzes the fading statistics of a generic fading distribution, termed the N-product Generalized Nakagami-m (GNM) distribution, constructed as the product of the power of N statistically independent and non-identically distributed GNM random variables, for the purpose of modeling the cascaded fading channels.
Abstract: In this paper, we analyze the fading statistics of a generic fading distribution, termed the N-product Generalized Nakagami-m (GNM) distribution (N*GNM distribution), constructed as the product of the power of N statistically independent and non-identically distributed GNM random variables, for the purpose of modeling the cascaded fading channels. In particular, using the Fox's H function, we derive the probability density function, the cumulative distribution function, the moment generating function and the moments of such channels in closed-form. These derived results are a convenient tool to statistically model the cascaded GNM fading channels and to analyze the performance of digital communication systems over these kinds of channels. As such, generic closed-form expressions for the amount of fading, the outage probability, the capacity, the outage capacity and the average bit error probabilities of digital communications systems over cascaded GNM fading channels are presented. Numerical and simulation results, performed to verify the correctness of the proposed formulation, are in perfect agreement.

179 citations


Journal ArticleDOI
TL;DR: A PMA-based RBDO method for problems with correlated random input variables using the Gaussian copula is developed; the copula accurately estimates joint normal and some lognormal CDFs of the input variables, which cover broad engineering applications.
Abstract: The reliability-based design optimization (RBDO) using performance measure approach for problems with correlated input variables requires a transformation from the correlated input random variables into independent standard normal variables. For the transformation with correlated input variables, the two most representative transformations, the Rosenblatt and Nataf transformations, are investigated. The Rosenblatt transformation requires a joint cumulative distribution function (CDF). Thus, the Rosenblatt transformation can be used only if the joint CDF is given or input variables are independent. In the Nataf transformation, the joint CDF is approximated using the Gaussian copula, marginal CDFs, and covariance of the input correlated variables. Using the generated CDF, the correlated input variables are transformed into correlated normal variables and then the correlated normal variables are transformed into independent standard normal variables through a linear transformation. Thus, the Nataf transformation can accurately estimate joint normal and some lognormal CDFs of the input variables, which cover broad engineering applications. This paper develops a PMA-based RBDO method for problems with correlated random input variables using the Gaussian copula. Several numerical examples show that the correlated random input variables significantly affect RBDO results.
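
A minimal sketch of the Nataf machinery described above, under assumed lognormal marginals and an assumed Gaussian-copula correlation: correlated inputs are generated through the copula, then mapped back through their marginal CDFs and a linear (inverse Cholesky) transformation into independent standard normals.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 50000
# Correlation of the underlying Gaussian copula (assumed given)
R = np.array([[1.0, 0.6], [0.6, 1.0]])
L = np.linalg.cholesky(R)

# Generate correlated inputs X with lognormal marginals via the Nataf model
z_corr = rng.standard_normal((n, 2)) @ L.T
u = stats.norm.cdf(z_corr)
x = stats.lognorm.ppf(u, s=0.5, scale=np.exp(1.0))  # inverse marginal CDFs

# Nataf transformation: X -> correlated normals -> independent standard normals
z_back = stats.norm.ppf(stats.lognorm.cdf(x, s=0.5, scale=np.exp(1.0)))
z_indep = z_back @ np.linalg.inv(L).T

corr_before = np.corrcoef(z_corr, rowvar=False)[0, 1]   # ~0.6
corr_after = np.corrcoef(z_indep, rowvar=False)[0, 1]   # ~0
```

In a full Nataf implementation the copula correlation would itself be solved from the target input correlation; here it is simply assumed.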

172 citations


Journal ArticleDOI
TL;DR: In this article, a method for solving a probabilistic power flow that deals with the uncertainties of (i) wind generation, (ii) load and (iii) generation availability in power systems is proposed.
Abstract: A method for solving a probabilistic power flow that deals with the uncertainties of (i) wind generation, (ii) load and (iii) generation availability in power systems is proposed. Dependence between random variables has been considered. The method is based on the properties of cumulants of random variables. Cornish-Fisher expansion series are used to obtain the cumulative distribution function (CDF) of the output variables. Multimodal CDFs are obtained by convolutions, whose number has been minimised in order to decrease the computation requirements.
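
The Cornish-Fisher step can be sketched as follows, using a skewed gamma sample as a stand-in for a power-flow output variable; the fourth-order expansion in the sample skewness and excess kurtosis is a standard form, and the distribution chosen here is an illustrative assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Output variable: a skewed sample (stand-in for a power-flow output)
samples = rng.gamma(shape=4.0, scale=1.0, size=200000)

# Low-order cumulants -> mean, std, skewness, excess kurtosis
mu, sig = samples.mean(), samples.std()
g1 = stats.skew(samples)
g2 = stats.kurtosis(samples)  # excess kurtosis

def cornish_fisher_quantile(p):
    """Fourth-order Cornish-Fisher approximation of the p-quantile."""
    z = stats.norm.ppf(p)
    w = (z
         + (z**2 - 1.0) * g1 / 6.0
         + (z**3 - 3.0 * z) * g2 / 24.0
         - (2.0 * z**3 - 5.0 * z) * g1**2 / 36.0)
    return mu + sig * w

q95_cf = cornish_fisher_quantile(0.95)      # expansion-based quantile
q95_emp = np.quantile(samples, 0.95)        # empirical quantile for comparison
```

Inverting such quantiles over a grid of p yields the CDF of the output variable, which is the role the expansion plays in the paper.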

151 citations


Journal ArticleDOI
TL;DR: These results confirm that the end-to-end performance of a dual-hop fixed gain relaying system exhibits an improved performance in a Rician/Rayleigh (source-relay link/relay-destination link) environment compared to a Rayleigh/Rician environment.
Abstract: In real wireless communication environments, it is highly likely that different channels associated with a relay network could experience different fading phenomena. In this paper, we investigate the end-to-end performance of a dual-hop fixed gain relaying system when the source-relay and the relay-destination channels experience Rayleigh/Rician and Rician/Rayleigh fading scenarios respectively. Analytical expressions for the cumulative distribution function of the end-to-end signal-to-noise ratio are derived and used to evaluate the outage probability and the average bit error probability of M-QAM modulations. Numerical and simulation results are presented to illustrate the impact of the Rician factor on the end-to-end performance. Furthermore, these results confirm that the system exhibits an improved performance in a Rician/Rayleigh (source-relay link/relay-destination link) environment compared to a Rayleigh/Rician environment.

144 citations


Journal ArticleDOI
TL;DR: In this article, a probabilistic small signal stability assessment (PSSSA) methodology based on the application of a Monte Carlo approach for iterative evaluation, via modal analysis of small signal stability (SSS), is proposed.
Abstract: This paper proposes a probabilistic small signal stability assessment (PSSSA) methodology based on the application of Monte Carlo approach for iterative evaluation, via modal analysis of small signal stability (SSS). Operation states represented by random values of generation and demand are analyzed. A probabilistic instability risk index based on cumulative probability distribution function of damping ratios of oscillatory modes is calculated, as well as a power system stabilizer (PSS) devices location index based on eigenvectors and participation factors, which are considered random variables. Moreover, the impact of long-distance power flows on oscillatory modes (OM) and how the damping of OM depends on the orientation and magnitude of power flows is investigated. Further, an additional index concerns qualitatively the determination of transfer capability as affected by small signal stability. PSSSA is tested on a reduced order model of New England-New York's interconnected system considering uncertainties around three different system conditions separately: highly loaded, fairly loaded, and lowly loaded. The results highlight the main advantages of PSSSA over deterministic SSS studies such as instability risk assessment, small signal stability enhancement through adequate PSS location, and the proposal of possible restrictions for transfer capability in order to avoid poorly damped oscillations in the face of the diversity in power system operation.

133 citations


Proceedings ArticleDOI
27 Jul 2009
TL;DR: Three variations on the Kolmogorov-Smirnov test for multi-dimensional data sets are surveyed, and it is proved that Cooke's algorithm runs in O(n^2), contrary to his claims that it runs in O(n lg n).
Abstract: Goodness-of-fit statistics measure the compatibility of random samples against some theoretical probability distribution function. The classical one-dimensional Kolmogorov-Smirnov test is a non-parametric statistic for comparing two empirical distributions which defines the largest absolute difference between the two cumulative distribution functions as a measure of disagreement. Adapting this test to more than one dimension is a challenge because there are 2^d − 1 independent ways of defining a cumulative distribution function when d dimensions are involved. In this paper three variations on the Kolmogorov-Smirnov test for multi-dimensional data sets are surveyed: Peacock's test [1] that computes in O(n^3); Fasano and Franceschini's test [2] that computes in O(n^2); Cooke's test that computes in O(n^2). We prove that Cooke's algorithm runs in O(n^2), contrary to his claims that it runs in O(n lg n). We also compare these algorithms with ROOT's version of the Kolmogorov-Smirnov test.
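
A brute-force two-sample variant in the spirit of Fasano and Franceschini's test can be sketched as below: for each data point, the empirical fractions of both samples in each of the four quadrants anchored at that point are compared, and the largest absolute difference is the statistic. This O(n^2) sketch is illustrative and omits the significance calculation.

```python
import numpy as np

def ks2d_2samp(a, b):
    """Brute-force two-sample 2-D Kolmogorov-Smirnov statistic.

    For every data point, compare the fractions of each sample falling
    in each of the four quadrants anchored at that point; the statistic
    is the largest absolute difference found. O(n^2) comparisons.
    """
    d = 0.0
    for pts in (a, b):
        for (x0, y0) in pts:
            for (sx, sy) in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                fa = np.mean((sx * (a[:, 0] - x0) >= 0) & (sy * (a[:, 1] - y0) >= 0))
                fb = np.mean((sx * (b[:, 0] - x0) >= 0) & (sy * (b[:, 1] - y0) >= 0))
                d = max(d, abs(fa - fb))
    return d

rng = np.random.default_rng(5)
same = ks2d_2samp(rng.standard_normal((150, 2)),
                  rng.standard_normal((150, 2)))
shifted = ks2d_2samp(rng.standard_normal((150, 2)),
                     rng.standard_normal((150, 2)) + 1.5)
```

Samples from the same distribution yield a small statistic, while a shifted sample produces a clearly larger one.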

126 citations


Journal ArticleDOI
TL;DR: In this article, a logistic approximation to the cumulative normal distribution is proposed, which has a simpler functional form and gives higher accuracy, with a maximum error of less than 0.00014 over the entire range.
Abstract: This paper develops a logistic approximation to the cumulative normal distribution. Although the literature contains a vast collection of approximate functions for the normal distribution, they are very complicated, not very accurate, or valid for only a limited range. This paper proposes an enhanced approximate function. When comparing the proposed function to other approximations studied in the literature, it can be observed that the proposed logistic approximation has a simpler functional form and that it gives higher accuracy, with a maximum error of less than 0.00014 over the entire range. This is, to the best of the authors’ knowledge, the lowest level of error reported in the literature. The proposed logistic approximate function may be appealing to researchers, practitioners and educators given its functional simplicity and mathematical accuracy.
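
A widely cited cubic-exponent logistic form from the 2009 literature achieves the quoted error level; the coefficients below are assumed to be those of that approximation, and the sketch checks it against the erf-based exact CDF.

```python
import math

def norm_cdf_logistic(x):
    """Logistic-type approximation to the standard normal CDF.

    Uses a cubic polynomial in the exponent; the coefficients are the
    commonly cited 2009 values (assumed here), giving a maximum absolute
    error of roughly 1.4e-4 over the whole real line.
    """
    return 1.0 / (1.0 + math.exp(-(0.07056 * x**3 + 1.5976 * x)))

def norm_cdf_exact(x):
    """Exact standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Scan a fine grid on [-8, 8] for the maximum absolute error
max_err = max(abs(norm_cdf_logistic(z / 100) - norm_cdf_exact(z / 100))
              for z in range(-800, 801))
```

Unlike rational or piecewise approximations, this form is a single invertible expression, which is what makes it attractive for teaching and quick calculations.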

Journal ArticleDOI
TL;DR: A nonparametric approach based on local linear fitting is advocated and the critical role of the bandwidth is identified, and its optimum value is estimated by a cross-validation procedure.
Abstract: A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or a slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value is estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental.
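
A minimal local linear fit of a psychometric function might look as follows; the stimulus levels, trial counts, Gaussian kernel, and fixed bandwidth are illustrative assumptions (the paper selects the bandwidth by cross-validation, which is omitted here).

```python
import numpy as np

def local_linear(x_grid, x, y, h):
    """Local linear estimate of E[y | x] with a Gaussian kernel, bandwidth h."""
    fits = []
    for x0 in x_grid:
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        fits.append(beta[0])                          # intercept = fit at x0
    return np.array(fits)

rng = np.random.default_rng(6)
levels = np.linspace(-3, 3, 13)                       # stimulus levels
n_trials = 40
p_true = 1.0 / (1.0 + np.exp(-2.0 * levels))          # unknown "true" model
prop = rng.binomial(n_trials, p_true) / n_trials      # observed proportions

fit = local_linear(levels, levels, prop, h=0.8)       # smoothed psychometric fn
```

No parametric shape (Gaussian, Weibull, logistic) is imposed; only smoothness is assumed, which is the point of the nonparametric approach.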

Journal ArticleDOI
01 Jan 2009
TL;DR: In this article, the authors present a design methodology to determine the optimal design of time-dependent, multi-response systems, by minimizing the cost during the life of the product.
Abstract: Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. As time progresses, the product may fail due to time phenomena such as time-dependent operating conditions, component degradation, etc. The degradation of reliability with time may increase the lifecycle cost due to potential warranty costs, repairs and loss of market share. In design for lifecycle cost, we must account for product quality, and time-dependent reliability. Quality is a measure of our confidence that the product conforms to specifications as it leaves the factory. Reliability depends on 1) the probability that the system will perform its intended function successfully for a specified interval of time (no hard failure), and 2) the probability that the system response will not exceed a threshold deemed objectionable by the customer or operator for a certain time period (no soft failure). Quality is time-independent, and reliability is time-dependent. This article presents a design methodology to determine the optimal design of time-dependent, multi-response systems, by minimizing the cost during the life of the product. The conformance of multiple responses is treated in a series-system fashion. The lifecycle cost includes a production, an inspection, and an expected variable cost. All costs depend on quality and/or reliability. The key to our approach is the calculation of the so-called system cumulative distribution function (time-dependent probability of failure). For that we use an equivalent time-invariant “composite” limit state which is accurate for systems that are monotonic or non-monotonic in time. Examples highlight the calculation of the cumulative distribution function and the design methodology for lifecycle cost. Copyright © 2009 by ASME

Proceedings ArticleDOI
26 Apr 2009
TL;DR: A performance analysis for cooperative diversity system with best relay selection over Rayleigh fading channels is presented and the performances of different cases are evaluated and compared to show the significant advantages of the relay selection in a cooperative communication.
Abstract: A performance analysis for cooperative diversity system with best relay selection over Rayleigh fading channels is presented. We obtain analytical expressions for the probability density function (PDF), cumulative distribution function (CDF), and the moment generating function (MGF) of end-to-end SNR of the system under study. Using these expressions we derive closed-form expressions for the average symbol error rate (SER), the outage probability and the average end-to-end SNR gain obtained from relay selection. Using numerical simulations and calculation of the mathematical expressions, the performances of different cases are evaluated and compared to show the significant advantages of the relay selection in a cooperative communication.
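
The selection step can be sketched in isolation: with i.i.d. Rayleigh links, each relay's SNR is exponential and best-relay selection takes the maximum, whose CDF is (1 − exp(−γ/γ̄))^N. This is a simplified single-link illustration, not the paper's full dual-hop end-to-end analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
N, gbar, n = 4, 10.0, 200000   # relays, average SNR, Monte Carlo draws

# Per-relay SNR under Rayleigh fading is exponential; best selection = max
snr = rng.exponential(gbar, size=(n, N)).max(axis=1)

def cdf_best(g):
    """Closed-form CDF of the selected SNR: (1 - exp(-g/gbar))**N."""
    return (1.0 - np.exp(-g / gbar)) ** N

g0 = 5.0                               # outage threshold (illustrative)
p_outage_mc = np.mean(snr < g0)        # Monte Carlo outage probability
p_outage_cf = cdf_best(g0)             # closed-form outage probability
```

The agreement between the simulated and closed-form outage probabilities mirrors the CDF-based derivations in the paper.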

Journal ArticleDOI
TL;DR: In this paper, the Weibull-gamma (WG) distribution is used for modeling fading environments when multipath is superimposed on shadowing, and the probability density, cumulative distribution, characteristic functions and the moments are derived in closed form.
Abstract: The Weibull-gamma (WG) distribution, which is appropriate for modelling fading environments when multipath is superimposed on shadowing, is introduced and studied. For this composite distribution the probability density, cumulative distribution, characteristic functions and the moments are derived in closed form. Furthermore, the average bit error and outage probabilities of a receiver operating over WG fading channels are assessed and compared with the corresponding performances obtained using other composite distributions.

Proceedings ArticleDOI
28 Jun 2009
TL;DR: A transparent approach to evaluating the CDF of indefinite quadratic forms in Gaussian random variables is proposed; as an application, the outage probability in multiuser beamforming is evaluated, and it is shown how the approach can be extended to other scenarios such as the joint distribution of quadratic forms and ratios of such forms.
Abstract: In this work, we propose a transparent approach to evaluating the CDF of indefinite quadratic forms in Gaussian random variables and ratios of such forms. This quantity appears in the analysis of different receivers in communication systems and in various applications in signal processing. Instead of attempting to find the pdf of this quantity as is the case in many papers in literature, we focus on finding the CDF. The basic trick that we implement is to replace inequalities that appear in the CDF calculations with the unit step function and replace the latter with its Fourier transform. This produces a multi-dimensional integral that can be evaluated using complex integration. We show how our approach extends to nonzero mean Gaussian real/complex vectors and to the joint distribution of indefinite quadratic forms.

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive description of the mathematical properties of a random variable X. The properties derived include the cumulative distribution function, the nth moment (including expressions for the first ten moments), nth central moment, moment generating function, characteristic function, mean deviation about the median, Renyi entropy, Shannon entropy, order statistics, estimation by the methods of moments and maximum likelihood, the associated Fisher information matrix and simulation issues.
Abstract: A random variable X is said to have Azzalini’s skew-logistic distribution if its pdf is f(x)=2g(x)G(λx), where g(⋅) and G(⋅), respectively, denote the pdf and cdf of the logistic distribution. This distribution—in spite of its simplicity—appears not to have been studied in detail. In this note, we provide a comprehensive description of the mathematical properties of X. The properties derived include the cumulative distribution function, the nth moment (including expressions for the first ten moments), the nth central moment, moment generating function, characteristic function, mean deviation about the mean, mean deviation about the median, Renyi entropy, Shannon entropy, order statistics, the asymptotic distribution of the extreme order statistics, estimation by the methods of moments and maximum likelihood, the associated Fisher information matrix and simulation issues. An application to logistic regression is discussed.
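
The density f(x) = 2 g(x) G(λx) is easy to write down and check numerically; the sketch below uses numerically stable forms of the logistic pdf and cdf and an illustrative λ.

```python
import numpy as np
from scipy import integrate

lam = 2.0  # skewness parameter (illustrative)

def g(x):
    """Logistic pdf in a numerically stable form: 0.25 / cosh(x/2)^2."""
    return 0.25 / np.cosh(x / 2.0) ** 2

def G(x):
    """Logistic cdf in a numerically stable form: 0.5 * (1 + tanh(x/2))."""
    return 0.5 * (1.0 + np.tanh(x / 2.0))

def f_skew_logistic(x):
    """Azzalini-type skew-logistic pdf: f(x) = 2 g(x) G(lam * x)."""
    return 2.0 * g(x) * G(lam * x)

# The density integrates to one; the CDF follows by numerical integration
total, _ = integrate.quad(f_skew_logistic, -np.inf, np.inf)
cdf_at_0, _ = integrate.quad(f_skew_logistic, -np.inf, 0.0)
```

For λ > 0 the mass shifts to the right, so the CDF at zero falls below one half, as the skew construction implies.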

Journal ArticleDOI
TL;DR: A piecewise-linear path-following method for kernel-based quantile regression that enables us to estimate the cumulative distribution function of p(y ∣ x) in piecewise-linear form for all x in the input domain.
Abstract: The goal of regression analysis is to describe the stochastic relationship between an input vector x and a scalar output y. This can be achieved by estimating the entire conditional density p(y ∣ x). In this letter, we present a new approach for nonparametric conditional density estimation. We develop a piecewise-linear path-following method for kernel-based quantile regression. It enables us to estimate the cumulative distribution function of p(y ∣ x) in piecewise-linear form for all x in the input domain. Theoretical analyses and experimental results are presented to show the effectiveness of the approach.

Book
21 Sep 2009
TL;DR: A textbook treatment of random phenomena that develops probability fundamentals, random variables and their distributions, ideal discrete and continuous probability models, statistical estimation, hypothesis testing, regression and design of experiments, with application case studies in reliability, quality assurance and multivariate analysis.
Abstract (table of contents):
Prelude: Approach; Philosophy; Four Basic Principles.
I. Foundations: Two Motivating Examples; Yield Improvement in a Chemical Process; Quality Assurance in a Glass Sheet Manufacturing Process; Outline of a Systematic Approach; Random Phenomena, Variability, and Uncertainty; Two Extreme Idealizations of Natural Phenomena; Random Mass Phenomena; Introducing Probability; The Probabilistic Framework.
II. Probability: Fundamentals of Probability Theory; Building Blocks; Operations; Probability; Conditional Probability; Independence; Random Variables and Distributions; Distributions; Mathematical Expectation; Characterizing Distributions; Special Derived Probability Functions; Multidimensional Random Variables; Distributions of Several Random Variables; Distributional Characteristics of Jointly Distributed Random Variables; Random Variable Transformations; Single Variable Transformations; Bivariate Transformations; General Multivariate Transformations; Application Case Studies I: Probability; Mendel and Heredity; World War II Warship Tactical Response Under Attack.
III. Distributions: Ideal Models of Discrete Random Variables; The Discrete Uniform Random Variable; The Bernoulli Random Variable; The Hypergeometric Random Variable; The Binomial Random Variable; Extensions and Special Cases of the Binomial Random Variable; The Poisson Random Variable; Ideal Models of Continuous Random Variables; Gamma Family Random Variables; Gaussian Family Random Variables; Ratio Family Random Variables; Information, Entropy, and Probability Models; Uncertainty and Information; Entropy; Maximum Entropy Principles for Probability Modeling; Some Maximum Entropy Models; Maximum Entropy Models from General Expectations; Application Case Studies II: In-Vitro Fertilization; In-Vitro Fertilization and Multiple Births; Probability Modeling and Analysis; Binomial Model Validation; Problem Solution: Model-Based IVF Optimization and Analysis; Sensitivity Analysis.
IV. Statistics: Introduction to Statistics; From Probability to Statistics; Variable and Data Types; Graphical Methods of Descriptive Statistics; Numerical Descriptions; Sampling; The Distribution of Functions of Random Variables; Sampling Distribution of the Mean; Sampling Distribution of the Variance; Estimation; Criteria for Selecting Estimators; Point Estimation Methods; Precision of Point Estimates; Interval Estimates; Bayesian Estimation; Hypothesis Testing; Basic Concepts; Concerning Single Mean of a Normal Population; Concerning Two Normal Population Means; Determining β, Power, and Sample Size; Concerning Variances of Normal Populations; Concerning Proportions; Concerning Non-Gaussian Populations; Likelihood Ratio Tests; Discussion; Regression Analysis; Simple Linear Regression; "Intrinsically" Linear Regression; Multiple Linear Regression; Polynomial Regression; Probability Model Validation; Probability Plots; Chi-Squared Goodness-of-Fit Test; Nonparametric Methods; Single Population; Two Populations; Probability Model Validation; A Comprehensive Illustrative Example; Design of Experiments; Analysis of Variance; Single Factor Experiments; Two-Factor Experiments; General Multi-factor Experiments; 2^k Factorial Experiments and Design; Screening Designs: Fractional Factorial; Screening Designs: Plackett-Burman; Response Surface Methodology; Introduction to Optimal Designs; Application Case Studies III: Statistics; Prussian Army Death-by-Horse Kicks; WW II Aerial Bombardment of London; US Population Dynamics: 1790-2000; Process Optimization.
V. Applications: Reliability and Life Testing; System Reliability; System Lifetime and Failure-Time Distributions; The Exponential Reliability Model; The Weibull Reliability Model; Life Testing; Quality Assurance and Control; Acceptance Sampling; Process and Quality Control; Chemical Process Control; Process and Parameter Design; Introduction to Multivariate Analysis; Multivariate Probability Models; Multivariate Data Analysis; Principal Components Analysis.
Appendix. Index.

Journal ArticleDOI
TL;DR: In this paper, the conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF, and the model is used to project the future cumulative distribution function of precipitation.
Abstract: Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.

Journal ArticleDOI
TL;DR: In this paper, the performance of the dual-hop semi-blind amplify-and-forward relay channel, where the intermediate relay node is selected depending on the instantaneous and partial channel knowledge, is investigated.
Abstract: The performance of the dual-hop semi-blind amplify-and-forward relay channel, where the intermediate relay node is selected depending on the instantaneous and partial channel knowledge, is investigated. Using closed-form expressions for the cumulative distribution function of the end-to-end signal-to-noise ratio (SNR), outage probability, SNR moments and average bit error rate are derived.
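As a rough illustration of how the end-to-end SNR's CDF yields the outage probability, the sketch below Monte Carlo-simulates the common variable-gain AF model gamma_eq = g1*g2/(g1 + g2 + 1) (an assumption for illustration; the paper's semi-blind relay uses a fixed gain and closed-form expressions) with hypothetical average per-hop SNRs, and reads the outage off the empirical CDF:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
gbar1, gbar2 = 10.0, 10.0              # assumed average per-hop SNRs
g1 = rng.exponential(gbar1, N)         # Rayleigh fading -> exponential SNR
g2 = rng.exponential(gbar2, N)
g_eq = g1 * g2 / (g1 + g2 + 1.0)       # variable-gain AF end-to-end SNR

gth = 5.0                              # outage threshold
p_out = np.mean(g_eq < gth)            # empirical CDF evaluated at the threshold
print(f"outage probability at SNR {gth}: {p_out:.3f}")
```

With a closed-form CDF in hand, as in the paper, the Monte Carlo step is unnecessary: the outage probability is simply the CDF evaluated at the threshold.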

Journal ArticleDOI
TL;DR: In this paper, a probabilistic method is developed to optimize the design of an idealized composite wing through consideration of the uncertainties in the material properties, fiber-direction angle, and ply thickness.
Abstract: A probabilistic method is developed to optimize the design of an idealized composite wing through consideration of the uncertainties in the material properties, fiber-direction angle, and ply thickness. The polynomial chaos expansion method is used to predict the mean, variance, and probability density function of the flutter speed, making use of an efficient Latin hypercube sampling technique. One-dimensional, two-dimensional, and three-dimensional polynomial chaos expansions are introduced into the probabilistic flutter model for different combinations of material, fiber-direction-angle, and ply-thickness uncertainties. The results are compared with Monte Carlo simulation, and it is found that the probability density functions obtained using second- and third-order polynomial chaos expansion models compare well but require much less computation. A reliability criterion, indicating the probability of failure due to flutter, is defined and used to successfully determine the optimal robust design of the composite wing.
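To make the polynomial chaos idea concrete, here is a one-dimensional sketch: expand a response g(X) with X ~ N(0,1) in probabilists' Hermite polynomials and recover the mean and variance directly from the coefficients via orthogonality. The toy quadratic response standing in for the flutter speed is invented, not the paper's aeroelastic model:

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as H

def pce_coeffs(g, order, nquad=40):
    """Coefficients of g(X), X~N(0,1), in probabilists' Hermite polynomials He_k:
    c_k = E[g(X) He_k(X)] / k!, computed by Gauss-Hermite quadrature."""
    x, w = H.hermegauss(nquad)       # nodes/weights for weight exp(-x^2/2)
    w = w / sqrt(2 * pi)             # normalize -> expectation under N(0,1)
    gx = g(x)
    return np.array([np.sum(w * gx * H.hermeval(x, np.eye(order + 1)[k]))
                     / factorial(k) for k in range(order + 1)])

# hypothetical smooth response of one uncertain input (not a real flutter model)
g = lambda x: 100 + 5 * x + 0.5 * x**2
c = pce_coeffs(g, order=3)
mean = c[0]                                              # E[g] = c_0
var = sum(factorial(k) * c[k]**2 for k in range(1, 4))   # Var[g] by orthogonality
print(mean, var)   # 100.5 and 25.5 for this toy response
```

For this polynomial response the expansion is exact; in the paper's setting the same moments would otherwise require many Monte Carlo flutter solves, which is the computational saving reported.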

Journal ArticleDOI
TL;DR: A receive antenna selection multiple-input-multiple-output (MIMO) system, where only one out of N_r receive antennas is selected, is considered, and it is demonstrated that the considered antenna selection system achieves the same full diversity order as a MIMO system without antenna selection.
Abstract: We consider a receive antenna selection multiple-input-multiple-output (MIMO) system, where only one out of N_r receive antennas is selected. Spatial channel correlation will be considered at the receiver side only. Our goal is to investigate the capacity performance of this system. In particular, we will derive a closed-form expression for the outage probability and an upper bound for the ergodic (average) capacity. These quantities will be expressed using an infinite series representation. To do so, we will first derive the joint cumulative distribution function and the joint probability density function of the squared row norms of the channel matrix. This is enabled by taking advantage of the statistical properties of multivariate chi-squared random variables. Using the outage expression derived, we demonstrate that the considered antenna selection system achieves the same full diversity order as a MIMO system without antenna selection. Next, we derive the probability density function of the maximum of the squared row norms of the channel matrix and its moments, which is directly related to the system's ergodic capacity. We also analyze the error rate performance of the aforementioned receive antenna selection MIMO system while using orthogonal space-time block codes (OSTBCs) at the transmitter. Our simulation results are shown to validate our analytical findings.
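For the special case of independent, identically distributed Rayleigh branches (a simplification; the paper treats spatial correlation, which is what requires the infinite-series machinery), the CDF of the selected branch's SNR is simply the CDF of the maximum of N_r exponentials, which a quick Monte Carlo check reproduces:

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, gbar = 4, 10.0        # number of receive antennas, average per-branch SNR
gth = 3.0                 # outage threshold

# i.i.d. case: P(max SNR < gth) = F(gth)^Nr with F exponential
p_closed = (1.0 - np.exp(-gth / gbar)) ** Nr

g = rng.exponential(gbar, size=(200_000, Nr))
p_mc = np.mean(g.max(axis=1) < gth)
print(p_closed, p_mc)     # the two estimates agree closely
```

Raising the single-branch CDF to the power N_r is also what produces the full diversity order N_r visible in the outage slope.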

Proceedings ArticleDOI
01 Dec 2009
TL;DR: The resulting deterministic approximation of Gaussian densities by means of discrete samples provides the basis for new types ofGaussian filters for estimating the state of nonlinear dynamic systems from noisy measurements.
Abstract: For the optimal approximation of multivariate Gaussian densities by means of Dirac mixtures, i.e., by means of a sum of weighted Dirac distributions on a continuous domain, a novel systematic method is introduced. The parameters of this approximate density are calculated by minimizing a global distance measure, a generalization of the well-known Cramér-von Mises distance to the multivariate case. This generalization is obtained by defining an alternative to the classical cumulative distribution, the Localized Cumulative Distribution (LCD). In contrast to the cumulative distribution, the LCD is unique and symmetric even in the multivariate case. The resulting deterministic approximation of Gaussian densities by means of discrete samples provides the basis for new types of Gaussian filters for estimating the state of nonlinear dynamic systems from noisy measurements.
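In one dimension, where the classical CDF is unproblematic, minimizing the Cramér-von Mises distance between a continuous CDF and n equally weighted Dirac components is commonly solved by placing the components at the (2i-1)/(2n) quantiles. The sketch below uses that one-dimensional placement (an assumption stated here for illustration, not the paper's multivariate LCD machinery) for a standard normal:

```python
import numpy as np
from statistics import NormalDist

def dirac_mixture_1d(n):
    """Equally weighted deterministic samples of N(0,1): the midpoint
    quantiles (2i-1)/(2n), a standard 1-D Cramer-von Mises placement."""
    nd = NormalDist()
    return np.array([nd.inv_cdf((2 * i - 1) / (2 * n)) for i in range(1, n + 1)])

pts = dirac_mixture_1d(5)
print(pts)          # symmetric about 0, median sample exactly at 0
print(pts.mean())   # ~0 by symmetry
```

The LCD of the paper exists precisely because this construction does not extend cleanly to several dimensions, where the classical multivariate CDF is neither unique nor symmetric.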

Proceedings ArticleDOI
17 May 2009
TL;DR: The probability density function and the cumulative distribution function of the product of shifted exponential variates are obtained in terms of the generalized upper incomplete Fox's H function and analytical and simulation results are in perfect agreement.
Abstract: The probability density function and the cumulative distribution function of the product of shifted exponential variates are obtained in terms of the generalized upper incomplete Fox's H function. Using these new results, the exact outage capacity of multicarrier transmission through a slow Rayleigh fading channel is presented. Moreover, it is shown that analytical and simulation results are in perfect agreement.
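The connection between the product of shifted exponentials and multicarrier outage capacity can be illustrated by Monte Carlo (hypothetical parameters; the paper gives the exact result via Fox's H function): with Rayleigh fading, each per-carrier SNR gamma_k is exponential, so each 1 + gamma_k is a shifted exponential variate and the total capacity is the base-2 log of their product.

```python
import numpy as np

rng = np.random.default_rng(2)
K, gbar = 4, 10.0                              # subcarriers, average SNR each
g = rng.exponential(gbar, size=(100_000, K))   # Rayleigh -> exponential SNR
C = np.log2(1 + g).sum(axis=1)                 # log2 of product of shifted exponentials

R = 10.0                                       # target rate, bits/s/Hz
p_out = np.mean(C < R)                         # outage from the empirical CDF of C
print(f"P(C < {R}) = {p_out:.3f}")
```

The paper's closed-form CDF of the product replaces the simulation step with a direct evaluation.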

Journal ArticleDOI
TL;DR: In this article, a closed-form analytical expression for the cumulative distribution function of the Hoyt distribution in terms of the symmetric difference of two Marcum Q functions with linear arguments is derived.
Abstract: A closed-form analytical expression is derived for the cumulative distribution function of the Hoyt distribution in terms of the symmetric difference of two Marcum Q functions with linear arguments. This expression finds applicability in the derivation of several performance metrics of wireless communications systems under Hoyt fading.

Journal ArticleDOI
TL;DR: In this paper, a random set, a generalization of a random variable is employed to formalize expert knowledge, and fuzzy sets are used to propagate this uncertainty to model estimates of contaminant transport.
Abstract: The characterization of aleatory hydrogeological parameter uncertainty has traditionally been accomplished using probability theory. However, when consideration is given to epistemic as well as aleatory uncertainty, probability theory is not necessarily appropriate. This is especially the case where expert opinion is regarded as a suitable source of information. When experts opine upon the uncertainty of a parameter value, both aleatoric and epistemic uncertainties are introduced and must be modeled appropriately. A novel approach to expert-provided parameter uncertainty characterization can be defined that bridges a historical gap between probability theory and fuzzy set theory. Herein, a random set, a generalization of a random variable, is employed to formalize expert knowledge, and fuzzy sets are used to propagate this uncertainty to model estimates of contaminant transport. The resultant random set-based concentration estimates are shown to be more general than the corresponding random variable estimates. In some cases, the random set-based results are shown as upper and lower probabilities that bound the corresponding random variable's cumulative distribution function.
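One concrete way to obtain such upper and lower probability bounds is a probability box built from interval-valued samples. The sketch below, with invented expert intervals rather than the paper's random-set propagation, computes belief/plausibility bounds on the CDF:

```python
import numpy as np

def pbox(intervals, xs):
    """Upper and lower CDF bounds (a p-box) from interval-valued samples.

    The lower CDF at x counts intervals lying entirely in (-inf, x]
    (belief); the upper CDF counts intervals whose left endpoint is
    at most x, i.e. those that could lie in (-inf, x] (plausibility)."""
    iv = np.asarray(intervals, dtype=float)   # shape (n, 2): [left, right]
    lower = np.array([np.mean(iv[:, 1] <= x) for x in xs])
    upper = np.array([np.mean(iv[:, 0] <= x) for x in xs])
    return lower, upper

# hypothetical expert-elicited intervals for a log-conductivity parameter
ivals = [(-5.2, -4.8), (-5.0, -4.5), (-4.9, -4.6), (-5.5, -5.0)]
xs = np.linspace(-6.0, -4.0, 5)
low, up = pbox(ivals, xs)
print(low)   # [0.   0.   0.25 1.   1.  ]
print(up)    # [0.   0.25 0.75 1.   1.  ]
```

Any CDF consistent with the intervals lies between the two bounds; a single (aleatory-only) random variable would collapse them to one curve.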

Journal ArticleDOI
TL;DR: A new family of copulas is introduced that provides a flexible dependence structure while remaining tractable and simple to use for multivariate discrete data modeling; its cumulative distribution function is simply a product of univariate normal distributions.

Journal ArticleDOI
TL;DR: In this paper, a review of the most common rain-rate cumulative distribution conversion methods is presented, together with a complementary set of coefficients for regional and global application by performing regression to a measurements database.
Abstract: The conversion of rain-rate cumulative distributions from any integration time, T, to one minute is a viable option whenever local one-minute data (time series or cumulative distribution functions) are not available for microwave system design. This paper reviews some of the most common rain-rate cumulative-distribution conversion methods. For selected models, it provides a complementary set of coefficients for regional and global application by performing regression to a measurements database. The performance of each model is analyzed, together with its adaptability to various climatic regions. Finally, recommendations with regard to the global applicability of models are given.
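A common family of conversion models relates equiprobable rain rates read from the two cumulative distributions by a power law, R_1(P) = a * R_T(P)^b, with the coefficients found by regression. A sketch with invented data (the rates and fitted coefficients here are illustrative, not values from the paper's measurement database):

```python
import numpy as np

# hypothetical equiprobable rain rates (mm/h) read from two measured CDFs
# at the same exceedance probabilities: 10-min integration vs 1-min
r_T = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # 10-min rates
r_1 = np.array([2.6, 6.9, 14.5, 30.0, 63.0])   # 1-min rates

# fit R1 = a * RT**b by least squares in log-log space
b, log_a = np.polyfit(np.log(r_T), np.log(r_1), 1)
a = np.exp(log_a)
print(f"R1 ≈ {a:.2f} * RT^{b:.3f}")
```

Fitting such coefficients per region, and testing how well they transfer across climates, is the kind of regression and performance analysis the paper carries out on its global database.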

Posted Content
TL;DR: In this article, the authors investigate a technique for introducing skewness or kurtosis into a symmetric or other distribution using a "transmutation map" which is the functional composition of the cumulative distribution function of one distribution with the inverse cumulative distribution (quantile) function of another.
Abstract: Motivated by the need for parametric families of rich and yet tractable distributions in financial mathematics, both in pricing and risk management settings, but also considering wider statistical applications, we investigate a novel technique for introducing skewness or kurtosis into a symmetric or other distribution. We use a "transmutation" map, which is the functional composition of the cumulative distribution function of one distribution with the inverse cumulative distribution (quantile) function of another. In contrast to the Gram-Charlier approach, this is done without resorting to an asymptotic expansion, and so avoids the pathologies that are often associated with it. Examples of parametric distributions that we can generate in this way include the skew-uniform, skew-exponential, skew-normal, and skew-kurtotic-normal.
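A minimal sketch of the transmutation map T(u) = F2(F1^{-1}(u)), using an illustrative exponential/normal pair rather than any particular distribution from the paper:

```python
import numpy as np
from scipy import stats

def transmutation_map(u, d1, d2):
    """Rank transmutation map T(u) = F2(F1^{-1}(u)), a monotone map
    from [0, 1] to [0, 1] built from two distributions."""
    return d2.cdf(d1.ppf(u))

# illustrative choice: exponential quantile composed with a shifted normal CDF
d1, d2 = stats.expon(), stats.norm(loc=1.0)
u = np.linspace(0.01, 0.99, 9)
t = transmutation_map(u, d1, d2)
print(t)   # strictly increasing values in (0, 1)
```

Composing such a map with a base CDF F yields a new distribution G = T(F(x)) whose shape (skewness, kurtosis) is reshaped by the choice of the two distributions, which is how the skew-uniform, skew-exponential, and related families in the abstract arise.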