
Showing papers on "Cumulative distribution function published in 2016"


Posted Content
TL;DR: The third-degree stochastic dominance condition was introduced in this article, where the authors show that the set of probability distributions that can be ordered by means of second-degree stochastic dominance is, in general, larger than that which can be ordered by first-degree stochastic dominance.
Abstract: Here F(x) and G(x) are less-than cumulative probability distributions, where x is a continuous or discrete random variable representing the outcome of a prospect. The closed interval [a, b] is the sample space of both prospects. The integral shown in Rule 2 and those shown throughout the paper are Stieltjes integrals. Recall that the Stieltjes integral ∫ₐᵇ f(x) dg(x) exists if one of the functions f and g is continuous and the other has finite variation in [a, b]. Let D1, D2, and D3 be three sets of utility functions φ(x). D1 is the set containing all utility functions with φ(x) and φ₁(x) continuous, and φ₁(x) > 0 for all x ∈ [a, b]. D2 is the set with φ(x), φ₁(x), φ₂(x) continuous, and φ₁(x) > 0, φ₂(x) ≤ 0 for all x ∈ [a, b]. D3 is the set with φ(x), φ₁(x), φ₂(x), φ₃(x) continuous, and φ₁(x) > 0, φ₂(x) ≤ 0, φ₃(x) ≥ 0 for all x ∈ [a, b]. Here φᵢ(x) denotes the ith derivative of φ(x). Hadar and Russell proved that Rule 1 is valid for all φ ∈ D1 and Rule 2 is valid for all φ ∈ D2. The authors point out that the set of probability distributions that can be ordered by means of second-degree stochastic dominance is, in general, larger than that which can be ordered by means of first-degree stochastic dominance. Note that in Rule 2, they assume that φ(x) is not only an increasing function of x but also exhibits weak global risk aversion, a condition guaranteed by requiring the second derivative of φ(x) to be nonpositive. In this paper, a condition which will be called third-degree stochastic dominance is considered. It is based on the following assumption about the form of the utility function φ(x). From a normative point of view, one expects the risk premium associated with an uncertain prospect to become smaller the greater the individual's wealth. The plausibility and implications of this assumption have been explored by John Pratt, as well as others.
The risk premium of an uncertain prospect is that amount by which the certainty equivalent of the prospect differs from its expected value. In mathematical terms, given the prospect F(x) with expected value A, the corresponding risk premium π is obtained by solving the following equation.
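The equation itself is cut off in the excerpt above. In the paper's notation (prospect F(x) on [a, b] with expected value A and utility function φ), the standard Pratt-style certainty-equivalent definition would read:

```latex
\int_a^b \phi(x)\,dF(x) \;=\; \phi(A - \pi)
```

that is, the expected utility of the uncertain prospect equals the utility of its expected value reduced by the risk premium π.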

537 citations


Journal ArticleDOI
TL;DR: In this article, a unified performance analysis of a single free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques is presented.
Abstract: In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques [i.e., intensity modulation/direct detection (IM/DD) and heterodyne detection]. More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system, all in terms of the Meijer's G function except for the moments, which are in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as the outage probability, the scintillation index (SI), the average error rate for binary and $M$-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, where we present closed-form lower-bound results), all in terms of Meijer's G functions except for the SI, which is in terms of simple elementary functions. Additionally, for all the expressions derived earlier in terms of the Meijer's G function, we derive asymptotic results in the high-SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer's G function. We also derive new asymptotic expressions for the ergodic capacity in the low- as well as high-SNR regimes in terms of simple elementary functions by utilizing moments. All the presented results are verified via computer-based Monte Carlo simulations.

273 citations


Journal ArticleDOI
TL;DR: A framework is developed to enable a much more extensive comparison between approximate methods for computing the cdf of weighted sums of an arbitrary random variable and a surprising result of this analysis is that the accuracy of these approximate methods increases with N.
Abstract: In many applications, the cumulative distribution function (cdf) $F_{Q_N}$ of a positively weighted sum of N i.i.d. chi-squared random variables $Q_N$ is required. Although there is no known closed-form solution for $F_{Q_N}$, there are many good approximations. When computational efficiency is not an issue, Imhof's method provides a good solution. However, when both the accuracy of the approximation and the speed of its computation are a concern, there is no clear preferred choice. Previous comparisons between approximate methods could be considered insufficient. Furthermore, in streaming data applications where the computation needs to be both sequential and efficient, only a few of the available methods may be suitable. Streaming data problems are becoming ubiquitous and provide the motivation for this paper. We develop a framework to enable a much more extensive comparison between approximate methods for computing the cdf of weighted sums of an arbitrary random variable. Utilising this framework, a new and comprehensive analysis of four efficient approximate methods for computing $F_{Q_N}$ is performed. This analysis procedure is much more thorough and statistically valid than previous approaches described in the literature. A surprising result of this analysis is that the accuracy of these approximate methods increases with N.
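As a rough illustration of the object being approximated, here is a minimal sketch (not the paper's framework) that estimates $F_{Q_N}$ by naive Monte Carlo and compares it against a CLT-based normal approximation; the weights and evaluation point below are arbitrary choices:

```python
import math, random

def weighted_chi2_cdf_mc(weights, q, n_samples=20000, seed=1):
    """Naive Monte Carlo estimate of F_{Q_N}(q) = P(sum_i w_i X_i^2 <= q), X_i i.i.d. N(0,1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        s = sum(w * rng.gauss(0.0, 1.0) ** 2 for w in weights)
        if s <= q:
            hits += 1
    return hits / n_samples

def normal_approx_cdf(weights, q):
    """CLT approximation: Q_N is roughly Normal(sum w_i, 2 sum w_i^2) for large N."""
    mu = sum(weights)
    sigma = math.sqrt(2.0 * sum(w * w for w in weights))
    return 0.5 * (1.0 + math.erf((q - mu) / (sigma * math.sqrt(2.0))))

weights = [1.0 + 0.01 * i for i in range(50)]   # N = 50 positive weights
q = sum(weights)                                 # evaluate the cdf at E[Q_N]
mc = weighted_chi2_cdf_mc(weights, q)
approx = normal_approx_cdf(weights, q)
```

The closeness of the two numbers for N = 50 is consistent with the paper's observation that approximation accuracy improves as N grows.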

70 citations


Journal ArticleDOI
TL;DR: The analysis quantifies the effects of the mobility and propagation environments, which are characterized by the path-loss exponent and Nakagami-m parameter for the desired and interfering signals, on the performance of a mobile receiver.
Abstract: The outage probability (OP) and average bit error rate (BER) of wireless digital systems in a Nakagami-$m$ fading environment have been well analyzed in the literature. Most of the analysis considers static wireless terminals in which the received signal power follows a gamma distribution. However, many mobile and ad hoc networks are composed of mobile receiving nodes with random mobility patterns. In this paper, we consider 1-D, 2-D, and 3-D wireless network deployment topologies with the random waypoint (RWP) mobility model. In such systems, the received power does not follow the gamma distribution in the presence of Nakagami-$m$ fading. We derive the probability density function (pdf) and the cumulative distribution function (cdf) of the received signal power for a mobile node. Consequently, the OP and the average BER of general modulation schemes are derived. Moreover, the effect of co-channel interference on the OP is investigated for interference-limited and interference-plus-noise systems. The analysis quantifies the effects of the mobility and propagation environments, which are characterized by the path-loss exponent and Nakagami-$m$ parameter for the desired and interfering signals, on the performance of a mobile receiver.

59 citations


Journal ArticleDOI
Ping Wang1, Ranran Wang1, Lixin Guo1, Tian Cao1, Yintang Yang1 
TL;DR: In this article, the average bit error rate (ABER) and outage performances of decode-and-forward (DF) based multi-hop parallel free-space optical (FSO) communication system with the combined effects of path loss, pointing errors (i.e., misalignment fading), and atmospheric turbulence-induced fading modeled by M distribution have been investigated in detail.

57 citations


Journal ArticleDOI
TL;DR: In this paper, a second-order polynomial is proposed to approximate the nonlinear relationship between the wind generation and the damping of a particular dynamic mode, such as the dominant mode.
Abstract: Wind generation is growing fast worldwide. The stochastic variation of large-scale wind generation may impact power systems in almost every aspect. Probabilistic analysis methods are an effective tool to study power systems with random factors. In this paper, a systematic nonlinear analytical probabilistic method is proposed to evaluate the possible effect of random wind power generation on power system small signal stability. A second-order polynomial is proposed to approximate the nonlinear relationship between the wind generation and the damping of a particular dynamic mode, such as the dominant mode. A Gaussian mixture model formulates wind uncertainty in a uniform way. The spectral theorem is adopted to reshape the second-order polynomial into a form without cross-product terms, and Cholesky decomposition is used to eliminate correlations among the outputs of different wind farms. The cumulative distribution function (CDF) of the damping ratio with respect to random wind power is then constructed. Numerical simulations are carried out on the IEEE standard test system. The proposed method is shown to be more accurate than the traditional linearized method, while being far less time-consuming than Monte Carlo simulation.
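The pipeline the abstract describes, a polynomial surrogate for the damping ratio, wind uncertainty drawn from a Gaussian mixture, and a resulting CDF of the damping ratio, can be sketched as follows; the polynomial coefficients and mixture parameters are invented for illustration and are not from the paper:

```python
import bisect, random

# Hypothetical quadratic surrogate mapping wind power p (MW) to the damping
# ratio (%) of the dominant mode -- stands in for the paper's fitted polynomial.
def damping_ratio(p):
    return 3.0 + 0.004 * p - 0.00001 * p * p

# Two-component Gaussian mixture for wind power output (assumed toy parameters).
def sample_wind_gmm(rng):
    if rng.random() < 0.6:
        return max(0.0, rng.gauss(80.0, 15.0))
    return max(0.0, rng.gauss(150.0, 25.0))

rng = random.Random(0)
samples = sorted(damping_ratio(sample_wind_gmm(rng)) for _ in range(5000))

def damping_cdf(x):
    # Empirical CDF of the damping ratio under wind uncertainty.
    return bisect.bisect_right(samples, x) / len(samples)
```

The paper constructs this CDF analytically rather than by sampling; the sketch only shows what distribution is being targeted.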

53 citations


Journal ArticleDOI
TL;DR: In this paper, a numerical-based algorithm to solve the probabilistic power flow problem is presented, where the Parzen window density estimator is used to efficiently estimate the probability distribution of power flow outputs.
Abstract: This paper presents a numerical-based algorithm to solve the probabilistic power flow problem. A Parzen window density estimator is used to efficiently estimate the probabilistic characteristics of power flow outputs. Correlations between wind generation, load, and plug-in hybrid electric vehicle charging stations are taken into account. The proposed algorithm works properly for random variables with various probability distribution functions and is very useful when limited information is available for each random variable. The algorithm is tested on the IEEE 14-bus and IEEE 118-bus systems considering correlated and uncorrelated conditions. Comparisons of the proposed algorithm with the $2n$ and $2n+1$ point estimate methods, as well as with Monte Carlo simulation and the linear diffusion method, are provided. In addition, probability density and cumulative distribution functions are determined using the proposed algorithm, the diffusion method, and the combined cumulants and Gram-Charlier approach for the $2n+1$ point estimate method. Error indices are introduced to evaluate all random variables in a single benchmark. Simulation results show the effectiveness of the proposed algorithm in providing complete statistical information for probabilistic power flow outputs.
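A minimal sketch of the Parzen window idea used here, with a Gaussian kernel; the data, bandwidth, and one-dimensional setting are illustrative assumptions, not the paper's power flow outputs:

```python
import math, random

def parzen_pdf(x, samples, h):
    """Parzen window density estimate with a Gaussian kernel of bandwidth h."""
    n = len(samples)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / (n * h * math.sqrt(2.0 * math.pi))

def parzen_cdf(x, samples, h):
    """Matching CDF estimate: the average of the kernel CDFs centred at the samples."""
    n = len(samples)
    return sum(0.5 * (1.0 + math.erf((x - s) / (h * math.sqrt(2.0)))) for s in samples) / n

# Stand-in data for one power flow output, e.g. a bus voltage magnitude in p.u.
rng = random.Random(42)
data = [rng.gauss(1.0, 0.2) for _ in range(2000)]
h = 0.05  # kernel bandwidth (an ad hoc choice here)
```

The appeal for probabilistic power flow is visible even in this toy: no parametric form for the output distribution is assumed, only samples.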

51 citations


Journal ArticleDOI
TL;DR: By applying the Random Variable Transformation technique, the first probability density function, the mean and the variance functions, as well as confidence intervals associated with the solution of SIS-type epidemiological models, are determined.

47 citations


Journal ArticleDOI
TL;DR: Analytical expressions for the joint probability density function, cumulative distribution function, and moment generating function of the multivariate ΓΓ distribution with arbitrary correlation are provided, and the performance of radio frequency and optical wireless communication systems is investigated.
Abstract: The statistical properties of the multivariate gamma–gamma $(\Gamma\Gamma)$ distribution with arbitrary correlation have remained unknown. In this paper, we provide analytical expressions for the joint probability density function (pdf), cumulative distribution function (cdf), and moment generating function (mgf) of the multivariate $\Gamma\Gamma$ distribution with arbitrary correlation. Furthermore, we present novel approximating expressions for the pdf and cdf of the sum of $\Gamma\Gamma$ random variables (RVs) with arbitrary correlation. Based on this statistical analysis, we investigate the performance of radio frequency (RF) and optical wireless communication systems. It is worth noting that the presented expressions include several previous results in the literature as special cases.

45 citations


Journal ArticleDOI
Junxing Li1, Zhihua Wang1, Xia Liu, Yongbo Zhang1, Huimin Fu1, Chengrui Liu 
TL;DR: A Wiener process model simultaneously incorporating temporal variability, individual variation and measurement errors is proposed to analyze the accelerated degradation test (ADT) and the maximum likelihood estimations of the model parameters are obtained.

41 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived a deterministic equation for the CDF of solute concentration, which accounts for uncertainty in flow velocity and initial conditions, and compared their predictions with those obtained with Monte Carlo simulations and an assumed beta CDF.
Abstract: Predictions of solute transport in subsurface environments are notoriously unreliable due to aquifer heterogeneity and uncertainty about the values of hydraulic parameters. A probabilistic framework, which treats the relevant parameters and solute concentrations as random fields, allows for quantification of this predictive uncertainty. By providing deterministic equations for either the probability density function or the cumulative distribution function (CDF) of predicted concentrations, the method of distributions enables one to estimate, e.g., the probability of a contaminant's concentration exceeding a safe dose. We derive a deterministic equation for the CDF of solute concentration, which accounts for uncertainty in flow velocity and initial conditions. The coefficients in this equation are expressed in terms of the mean and variance of concentration. The accuracy and robustness of the CDF equations are analyzed by comparing their predictions with those obtained with Monte Carlo simulations and an assumed beta CDF.

Journal ArticleDOI
04 Nov 2016
TL;DR: New original algorithmic implementations of methods for numerical inversion of the characteristic function which are especially suitable for typical metrological applications are proposed and are an efficient alternative to the standard Monte Carlo methods.
Abstract: Measurement uncertainty analysis based on combining the state-of-knowledge distributions requires evaluation of the probability density function (PDF), the cumulative distribution function (CDF), and/or the quantile function (QF) of a random variable reasonably associated with the measurand. These can be derived from the characteristic function (CF), which is defined as the Fourier transform of the probability distribution function. Working with CFs provides an alternative and frequently much simpler route than working directly with PDFs and/or CDFs. In particular, derivation of the CF of a weighted sum of independent random variables is a simple and trivial task. However, the analytical derivation of the PDF and/or CDF by using the inverse Fourier transform is available only in special cases. Thus, in most practical situations, numerical derivation of the PDF/CDF from the CF is an indispensable tool. In metrological applications, such an approach can be used to form the probability distribution for the output quantity of a measurement model of additive, linear or generalized linear form. In this paper we propose new original algorithmic implementations of methods for numerical inversion of the characteristic function which are especially suitable for typical metrological applications. The suggested numerical approaches are based on the Gil-Pelaez inverse formulae and on approximation by the discrete Fourier transform, using the fast Fourier transform (FFT) algorithm, for computing the PDF/CDF of univariate continuous random variables. As illustrated here, for typical metrological applications based on linear measurement models the suggested methods are an efficient alternative to the standard Monte Carlo methods.
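The Gil-Pelaez inversion formula the paper builds on can be sketched directly, using a simple midpoint quadrature rather than the paper's FFT-based implementation; the truncation limit and step count below are ad hoc choices, and the standard normal CF serves as a known test case:

```python
import cmath, math

def gil_pelaez_cdf(x, cf, t_max=40.0, n=4000):
    """CDF by Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * int_0^inf Im[e^{-itx} cf(t)] / t dt,
    approximated with a midpoint rule on (0, t_max] (midpoints avoid the t = 0 singularity)."""
    dt = t_max / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        total += (cmath.exp(-1j * t * x) * cf(t)).imag / t
    return 0.5 - total * dt / math.pi

# Characteristic function of the standard normal, exp(-t^2 / 2).
std_normal_cf = lambda t: math.exp(-0.5 * t * t)
```

For a weighted sum of independent inputs, the CF to invert would simply be the product of the component CFs evaluated at the scaled arguments, which is what makes the CF route attractive for linear measurement models.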

Journal ArticleDOI
TL;DR: This study carries out the performance analysis of an asymmetric dual hop relay system composed of both radio-frequency (RF) and free-space optical (FSO) links and provides the new closed-form expressions of outage probability and the ergodic channel capacity.
Abstract: In this study, the authors carry out the performance analysis of an asymmetric dual-hop relay system composed of both radio-frequency (RF) and free-space optical (FSO) links. The RF link is subject to generalised η-μ fading, while the channel for the FSO link is modelled by the gamma-gamma distribution. A decode-and-forward relaying scheme is used, where the relay decodes the received RF signal from the source and converts it into an optical signal using the sub-carrier intensity-modulation (SIM) scheme for transmission over the FSO link. The FSO link is subject to pointing errors, and both types of detection techniques, i.e. IM/DD and heterodyne detection, are accounted for. Novel exact closed-form expressions for the probability density function and cumulative distribution function of the equivalent end-to-end signal-to-noise ratio of the mixed RF/FSO system are derived in terms of Meijer's G function. Capitalising on these derived channel statistics, the authors provide new closed-form expressions for the outage probability and the ergodic channel capacity. They also provide the average bit-error rate for different binary modulations. Furthermore, Monte Carlo simulations validate the analytical results.

Journal ArticleDOI
TL;DR: It is shown that an effective computational version of the adaptive last particle method, where trajectories are no longer i.i.d., follows the same statistics as the idealized version when the reaction coordinate is the committor function.
Abstract: The adaptive last particle method is a simple and interesting alternative in the class of general splitting algorithms for estimating tail distributions. We consider this algorithm in the space of trajectories and for general reaction coordinates. Using a combinatorial approach in discrete state spaces, we demonstrate two new results. First, we give the exact expression of the distribution of the number of iterations in an idealized version of the algorithm where trajectories are i.i.d. This result improves on previously known results when the cumulative distribution function has discontinuities. Second, we show that an effective computational version of the algorithm, where trajectories are no longer i.i.d., follows the same statistics as the idealized version when the reaction coordinate is the committor function.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a fading model that accounts for virtually all relevant short-term propagation phenomena described in the literature, such as nonlinearity of the medium, power of the scattered waves, and number of multipath clusters.
Abstract: A fading model—the $\alpha $ - $\eta $ - $\kappa $ - $\mu $ model—is proposed that accounts for virtually all relevant short-term propagation phenomena described in the literature. These phenomena include nonlinearity of the medium, power of the scattered waves, power of the dominant components, and number of multipath clusters. They are mapped onto physical parameters, and apart from the first one, which is assumed to influence only the resulting signal envelope, the others affect independently its in-phase and quadrature components. Imbalance parameters are then introduced so as to better assess the impact of these phenomena on the entire process. The signal is described by means of its envelope and phase probability density functions (pdfs). To this end, complex-based and envelope-based models are proposed and explored. An exact joint envelope-phase pdf is found in the closed form. The marginal statistics in these cases remain in the integral forms. Interestingly, the envelope density function obtained through the envelope-based model is found to compute more efficiently than that obtained through the complex model. In addition, capitalizing on the results in the literature, exact series-expansion formulations are found for the envelope pdf as well as for the envelope cumulative distribution function. To the best of the author’s knowledge, this is probably the most general and unifying, physically based, complex fading model proposed in the literature. Because of its comprehensiveness, a number of issues remain open and constitute a fertile field for investigation.

Journal ArticleDOI
TL;DR: In this paper, a new random utility model is proposed, which is characterized by a cumulative distribution function obtained as a finite mixture of different cdfs, and the closed-form covariance expression opens up interesting application possibilities in some special choice contexts, where prior expectations in terms of the covariance matrix can be formulated.
Abstract: This paper proposes a new random utility model characterised by a cumulative distribution function (cdf) obtained as a finite mixture of different cdfs. This entails that choice probabilities, covariances and elasticities of this model are also a finite mixture of choice probabilities, covariances and elasticities of the mixing models. As a consequence, by mixing nested logit cdfs, a model is generated with closed-form expressions for choice probabilities, covariances and elasticities and with, potentially, a very flexible correlation pattern. Importantly, the closed-form covariance expression opens up interesting application possibilities in some special choice contexts, like route choice, where prior expectations in terms of the covariance matrix can be formulated.
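The defining construction, a CDF that is a finite mixture of component CDFs, is straightforward to sketch; the two normal components and weights below are illustrative stand-ins for the nested logit cdfs mixed in the paper:

```python
import math

def mixture_cdf(cdfs, weights):
    """Finite mixture of CDFs: F(x) = sum_k w_k F_k(x), with weights summing to one."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return lambda x: sum(w * G(x) for w, G in zip(weights, cdfs))

# Two normal CDFs as illustrative mixing components.
F1 = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))          # N(0, 1)
F2 = lambda x: 0.5 * (1.0 + math.erf((x - 2.0) / math.sqrt(2.0)))  # N(2, 1)
F = mixture_cdf([F1, F2], [0.7, 0.3])
```

Because expectation is linear, choice probabilities, covariances and elasticities inherit exactly this mixture structure, which is the property the paper exploits.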

Book ChapterDOI
02 Apr 2016
TL;DR: This work enables a new approach to this problem of reasoning about the probability of assertion violations in straight-line, nonlinear computations involving uncertain quantities modeled as random variables that explicitly avoids subdividing the domain of inputs.
Abstract: We consider the problem of reasoning about the probability of assertion violations in straight-line, nonlinear computations involving uncertain quantities modeled as random variables. Such computations are quite common in many areas such as cyber-physical systems and numerical computation. Our approach extends probabilistic affine forms, an interval-based calculus for precisely tracking how the distribution of a given program variable depends on uncertain inputs modeled as noise symbols. We extend probabilistic affine forms using the precise tracking of dependencies between noise symbols combined with the expectations and higher order moments of the noise symbols. Next, we show how to prove bounds on the probabilities that program variables take on specific values by using concentration of measure inequalities. Thus, we enable a new approach to this problem that explicitly avoids subdividing the domain of inputs, as is commonly done in the related work. We illustrate the approach in this paper on a variety of challenging benchmark examples, and thus study its applicability to uncertainty propagation.
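A toy version of the idea, propagating moments through a nonlinear computation on noise symbols and then bounding an assertion-violation probability with a concentration inequality (here plain Chebyshev rather than the sharper inequalities the paper uses), might look like this; the product form and uniform noise symbols are assumptions for illustration:

```python
import random

# Toy probabilistic affine form: y = x1 * x2 with x1 = 1 + 0.1*e1 and x2 = 2 + 0.2*e2,
# where e1, e2 are independent noise symbols uniform on [-1, 1] (assumed setup).
M2 = 1.0 / 3.0   # second moment E[e^2] of a uniform[-1, 1] noise symbol
mean_y = 2.0     # all cross terms in y have zero mean
# From y - 2 = 0.2*e1 + 0.2*e2 + 0.02*e1*e2, using independence of e1 and e2:
var_y = 0.2 ** 2 * M2 + 0.2 ** 2 * M2 + 0.02 ** 2 * M2 ** 2

# Chebyshev concentration bound on the "assertion violation" P(|y - mean_y| >= 0.5).
t = 0.5
bound = var_y / t ** 2

# Monte Carlo check of the bound; here the true probability is exactly 0,
# since |y - 2| can never exceed 0.2 + 0.2 + 0.02 = 0.42.
rng = random.Random(0)
def sample_y():
    return (1.0 + 0.1 * rng.uniform(-1.0, 1.0)) * (2.0 + 0.2 * rng.uniform(-1.0, 1.0))
freq = sum(abs(sample_y() - mean_y) >= t for _ in range(10000)) / 10000
```

Note that the bound is obtained without subdividing the input domain, which mirrors the paper's stated goal, at the cost of some conservatism.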

Journal ArticleDOI
TL;DR: Two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels are proposed.
Abstract: The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and the Maximum Ratio Combining (MRC) receivers. In this case, the problem turns out to be that of computing the Cumulative Distribution Function (CDF) of a sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive MC simulations would require high computational complexity. Along these lines, we propose two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies for arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.
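To see why importance sampling pays off for small tail probabilities of a sum, here is a minimal exponentially twisted estimator for i.i.d. exponential summands, a simpler stand-in for the paper's hazard rate twisting over generalized fading variates; the parameters are illustrative:

```python
import math, random

def is_tail_prob(n, gamma, n_samples=20000, seed=7):
    """Importance-sampling estimate of P(X1 + ... + Xn > gamma) for Xi i.i.d. Exp(1),
    using an exponential change of measure (tilt) centred on the rare event."""
    theta = 1.0 - n / gamma          # tilt chosen so the twisted mean of the sum is gamma
    rate = 1.0 - theta               # twisted samples are Exp(rate)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        s = sum(rng.expovariate(rate) for _ in range(n))
        if s > gamma:
            total += math.exp(-theta * s) / rate ** n   # likelihood ratio f(s)/f_twisted(s)
    return total / n_samples

# P(S_5 > 25) is about 2.7e-7: naive MC with 2e4 samples would almost surely return 0,
# while the twisted estimator lands on the rare event roughly half the time.
estimate = is_tail_prob(5, 25.0)
```

The hazard rate twisting of the paper generalizes this change-of-measure idea to heavy-tailed and arbitrary fading distributions where plain exponential twisting fails.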

Posted Content
TL;DR: This paper introduces an estimator of the extreme-value index in the presence of a random covariate when the response variable is right-censored, whether its conditional distribution belongs to the Frechet, Weibull or Gumbel domain of attraction.
Abstract: In extreme value theory, the extreme-value index is a parameter that controls the behavior of a cumulative distribution function in its right tail. Estimating this parameter is thus the first step when tackling a number of problems related to extreme events. In this paper, we introduce an estimator of the extreme-value index in the presence of a random covariate when the response variable is right-censored, whether its conditional distribution belongs to the Frechet, Weibull or Gumbel domain of attraction. The pointwise weak consistency and asymptotic normality of the proposed estimator are established. Some illustrations on simulations are provided and we showcase the estimator on a real set of medical data.

Journal ArticleDOI
TL;DR: The results show that optimum power allocation improves the system performance compared with uniform power allocation and adaptive power allocation under the total-transmit-power constraint.
Abstract: In this paper, the performance of a dual-hop multiuser underlay cognitive network is thoroughly investigated by using a decode-and-forward (DF) protocol at the relay node and employing opportunistic scheduling at the destination users. A practical scenario where cochannel interference signals are present in the system is considered for the investigation. Considering that transmissions are performed over nonidentical Rayleigh fading channels, first, the exact signal-to-interference-plus-noise ratio (SINR) of the network is formulated. Then, the exact equivalent cumulative distribution function (cdf) and the outage probability of the system SINR are derived. An efficient tight approximation is proposed for the per-hop cdfs, and based on this, the closed-form expressions for the error probability and the ergodic capacity are derived. Furthermore, an asymptotic expression for the cdf of the instantaneous SINR is derived, and a simple and general asymptotic expression for the error probability is presented and discussed. Moreover, adaptive power allocation under the total-transmit-power constraint is studied to minimize the asymptotic average error probability. As expected, the results show that optimum power allocation improves the system performance compared with uniform power allocation. Finally, the theoretical analysis is validated by presenting various numerical results and Monte Carlo simulations.

Journal ArticleDOI
TL;DR: In this paper, an estimator of the extreme-value index in the presence of a random covariate when the response variable is right-censored, whether its conditional distribution belongs to the Frechet, Weibull or Gumbel domain of attraction is presented.

Journal ArticleDOI
TL;DR: The R package simplexreg, which provides dispersion-model fitting of the simplex distribution, is introduced to model proportional outcomes such as percentages and rates.
Abstract: Outcomes of continuous proportions arise in many applied areas. Such data are typically measured as percentages, rates or proportions confined to the unit interval. In this paper, the R package simplexreg, which provides dispersion-model fitting of the simplex distribution, is introduced to model such proportional outcomes. The maximum likelihood method and generalized estimating equation techniques are available for parameter estimation in cross-sectional and longitudinal studies, respectively. This paper presents the methods and algorithms implemented in the package, including parameter estimation, model checking, as well as density, cumulative distribution, quantile and random number generating functions of the simplex distribution. The package is applied to real data sets for illustration.
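For reference, the simplex density that simplexreg fits has a known closed form; the sketch below implements it and numerically checks that it integrates to one (this is a from-scratch illustration, not the package's code):

```python
import math

def simplex_pdf(y, mu, sigma2):
    """Density of the simplex distribution S^-(mu, sigma2) on (0, 1):
    f(y) = [2*pi*sigma2*(y(1-y))^3]^(-1/2) * exp(-d(y; mu) / (2*sigma2)),
    with unit deviance d(y; mu) = (y - mu)^2 / [y(1-y) mu^2 (1-mu)^2]."""
    d = (y - mu) ** 2 / (y * (1.0 - y) * mu ** 2 * (1.0 - mu) ** 2)
    return (2.0 * math.pi * sigma2 * (y * (1.0 - y)) ** 3) ** -0.5 * math.exp(-d / (2.0 * sigma2))

# Midpoint-rule sanity check that the density integrates to ~1 over (0, 1).
n = 50000
total = sum(simplex_pdf((k + 0.5) / n, 0.5, 1.0) for k in range(n)) / n
```

Unlike the beta distribution, the simplex family is a proper dispersion model, which is what makes the GEE machinery in the package applicable.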

Journal ArticleDOI
TL;DR: In this article, a statistical process control technique, known as the cumulative sum (CUSUM) chart, was used for the detection of small but persistent shifts in the high-rate GPS carrier-phase measurements.
Abstract: Timely and correctly evaluating the quality of Global Positioning System (GPS) data is essential for reduction in the number of false alarms and missed detection of a GPS-based bridge deformation monitoring system. This paper investigates how to use the statistical process control technique, known as the cumulative sum (CUSUM) chart, for the detection of small but persistent shifts in the high-rate GPS carrier-phase measurements. First, a mathematical model for the shift detection based on the continuous hypothesis testing is established. The main features and implementation procedure of the CUSUM chart for the shift detection are then summarized, and the corresponding parameter selection method is discussed in detail. To meet the normality requirement of the CUSUM chart, a novel method that transfers the data to the Q-statistic by the estimated cumulative distribution functions is proposed according to the probability integral transform theory. This is followed by a simulation carried out to evaluate the...
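The one-sided CUSUM recursion underlying the chart can be sketched as follows; the shift size, allowance k, and threshold h below are illustrative choices, not the paper's GPS settings:

```python
import random

def cusum_upper(data, mu0, k, h):
    """One-sided upper CUSUM chart: C_t = max(0, C_{t-1} + (x_t - mu0 - k)).
    Returns the index of the first sample where C_t exceeds h, or None."""
    c = 0.0
    for i, x in enumerate(data):
        c = max(0.0, c + (x - mu0 - k))
        if c > h:
            return i
    return None

# In-control N(0, 1) observations followed by a small persistent +1 shift,
# the kind of change a Shewhart chart is slow to flag but CUSUM accumulates.
rng = random.Random(3)
data = [rng.gauss(0.0, 1.0) for _ in range(200)] + [rng.gauss(1.0, 1.0) for _ in range(100)]
alarm = cusum_upper(data, mu0=0.0, k=0.5, h=5.0)  # k: allowance, h: decision interval
```

The chart's normality requirement is exactly why the paper transforms the carrier-phase residuals to Q-statistics before applying this recursion.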

Proceedings ArticleDOI
22 May 2016
TL;DR: This paper proposes a novel statistic which depicts the number of errors contained in the ordered received noisy codeword and incorporates the properties of this new statistic to derive the simplified error performance bound of the OSD algorithm for all order-i reprocessing.
Abstract: In this paper, a novel simplified statistical approach to evaluate the error performance bound of Ordered Statistics Decoding (OSD) of Linear Block Codes (LBC) is investigated. First, we propose a novel statistic which depicts the number of errors contained in the ordered received noisy codeword. Simplified expressions for the probability mass function and cumulative distribution function are then derived by exploiting the implicit statistical independence of the samples of the received noisy codeword before reordering. Second, we incorporate the properties of this new statistic to derive the simplified error performance bound of the OSD algorithm for all order-i reprocessing. Finally, with the proposed approach, we obtain computationally simpler error performance bounds for the OSD than those proposed in the literature, for LBCs of all lengths.

Journal ArticleDOI
TL;DR: In this paper, the authors presented a stochastic approach for the characterization of daily residential water use, which considers a unique probabilistic distribution for any time during the day, and thus for any entity of the water demanded by the users.
Abstract: Residential water demand is a random variable that greatly influences the performance of municipal water distribution systems. The water request at network nodes reflects the behavior of the residential users, and a proper characterization of their water use habits is vital for hydraulic system modeling. This study presents a stochastic approach to the characterization of daily residential water use. The proposed methodology adopts a single probabilistic distribution – the Mixed Distribution – for any time during the day, and thus for any amount of water demanded by the users. This distribution is obtained by merging two cumulative distribution functions, accounting for the spike in the cumulative frequencies at null requests. The methodology has been tested on three real water distribution networks with different water use habits. Experimental relations are given to estimate the parameters of the proposed stochastic model as functions of the number of users and of the average daily trend. Numerical examples of a practical application have shown the effectiveness of the proposed approach for generating time series of residential water demand.
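A minimal sketch of such a mixed distribution, assuming a hypothetical atom of probability `p0` at zero demand (the null requests) merged with an illustrative exponential continuous part; the paper's Mixed Distribution is instead calibrated to measured data:

```python
import numpy as np

def mixed_cdf(x, p0, cont_cdf):
    """CDF with an atom of probability p0 at zero (no request) merged with
    a continuous CDF for the positive demands."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.0, 0.0, p0 + (1.0 - p0) * cont_cdf(np.maximum(x, 0.0)))

def sample_mixed(n, p0, cont_inv_cdf, rng):
    """Inverse-transform sampling: zero with probability p0, otherwise a
    positive demand drawn from the continuous part."""
    u = rng.random(n)
    out = np.zeros(n)
    pos = u > p0
    out[pos] = cont_inv_cdf((u[pos] - p0) / (1.0 - p0))
    return out

lam = 2.0  # rate of the (illustrative) exponential continuous part
exp_cdf = lambda x: 1.0 - np.exp(-lam * x)
exp_inv = lambda u: -np.log(1.0 - u) / lam
rng = np.random.default_rng(1)
demand = sample_mixed(10_000, 0.4, exp_inv, rng)  # 40% null requests
```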

Journal ArticleDOI
TL;DR: An efficient expectation-maximization (EM) algorithm is proposed for accurately estimating the direction-of-arrival (DOA) of the signal from a far field source in the presence of biased TDOA measurements and results demonstrate that the proposed EM-based estimators can considerably outperform the existing algorithms.
Abstract: A sound localization system consisting of a number of spatially distributed sensors can be employed to estimate the source bearing by measuring the relative time-difference-of-arrival (TDOA) of the transient acoustic signal. However, sensor measurements may contain nonnegligible consistent systematic biases, resulting in significant direction finding estimation error. In this paper, an efficient expectation-maximization (EM) algorithm is proposed for accurately estimating the direction-of-arrival (DOA) of the signal from a far field source in the presence of biased TDOA measurements. The unknown biases are treated as hidden variables, and nonlinear least-squares estimators are developed to jointly estimate the biases and DOA parameters for both the reference-free mode and the reference mode. The TDOA error distribution is investigated, and four different distributions [Gaussian distribution, Laplace distribution, Gaussian-Laplace distribution, and generalized normal distribution (GND)] are employed to fit the experimental data. It is observed that GND models the biased TDOA errors best in terms of the root-mean-square error of the cumulative distribution function fit, while the Laplace distribution offers a good tradeoff between accuracy and complexity. Both the simulation and field experimental results demonstrate that the proposed EM-based estimators can considerably outperform the existing algorithms.
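The CDF-fitting criterion mentioned above can be illustrated as follows. The data here are synthetic Laplace-distributed "TDOA errors", and the moment/quantile fits are a simplification, not the paper's estimators:

```python
import numpy as np
from scipy.stats import laplace, norm

def cdf_rmse(data, model_cdf):
    """Root-mean-square error between the empirical CDF of the data and a
    candidate model CDF (the fitting criterion named in the abstract)."""
    xs = np.sort(data)
    emp = (np.arange(1, len(xs) + 1) - 0.5) / len(xs)
    return float(np.sqrt(np.mean((emp - model_cdf(xs)) ** 2)))

rng = np.random.default_rng(2)
errors = rng.laplace(0.0, 1.0, 5000)  # synthetic heavy-tailed "TDOA errors"

# Simple fits: moments for the Gaussian, median and mean absolute
# deviation (the Laplace MLE) for the Laplace.
mu, sigma = errors.mean(), errors.std()
med = np.median(errors)
b = np.mean(np.abs(errors - med))

rmse_gauss = cdf_rmse(errors, norm(mu, sigma).cdf)
rmse_lap = cdf_rmse(errors, laplace(med, b).cdf)
# On heavy-tailed data the Laplace fit should achieve the lower RMSE.
```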

Journal ArticleDOI
TL;DR: Newdistns as mentioned in this paper computes the probability density function, cumulative distribution function, quantile function, random numbers and some measures of inference for nineteen families of distributions; each family is flexible enough to encompass a large number of structures.
Abstract: The contributed R package Newdistns, written by the authors, is introduced. This package computes the probability density function, cumulative distribution function, quantile function, random numbers and some measures of inference for nineteen families of distributions. Each family is flexible enough to encompass a large number of structures. The use of the package is illustrated with a real data set, and the robustness of the random number generation is checked by simulation.
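The "family built on a parent distribution" structure that such packages expose can be sketched in Python with the exponentiated-G family, F(x) = G(x)^a (an illustrative choice of family and parent, not necessarily one of the nineteen in the package):

```python
import numpy as np
from scipy.stats import expon

def exp_g(base, a):
    """Exponentiated-G family: F(x) = G(x)**a for a parent CDF G and a > 0.
    Returns the four functions such a package exposes for each family:
    cdf, pdf, quantile, and a random-number generator."""
    cdf = lambda x: base.cdf(x) ** a
    pdf = lambda x: a * base.cdf(x) ** (a - 1.0) * base.pdf(x)
    quantile = lambda u: base.ppf(np.asarray(u) ** (1.0 / a))
    rvs = lambda n, rng: quantile(rng.random(n))  # inverse-transform sampling
    return cdf, pdf, quantile, rvs

cdf, pdf, quantile, rvs = exp_g(expon(), a=2.0)  # exponential parent G
sample = rvs(50_000, np.random.default_rng(3))
```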

Journal ArticleDOI
TL;DR: A theoretical model for estimating the probability of the consequent rank reversal using the multivariate normal cumulative distribution function is provided and the model is applied to two alternative weight extraction methods frequently used in the literature: the geometric mean and the eigenvalue method.
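The rank-reversal probability obtained from the multivariate normal CDF can be illustrated on hypothetical numbers: if the priority-weight differences between two alternatives are jointly normal, a reversal corresponds to the difference vector falling in the negative orthant (all means and covariances below are made up for the sketch):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical priority-weight differences between alternatives A and B,
# (w_A1 - w_B1, w_A2 - w_B2), assumed jointly normal.
mean = np.array([0.05, 0.02])
cov = np.array([[0.010, 0.004],
                [0.004, 0.008]])

# A rank reversal means both differences turn negative, i.e. the vector
# falls in the negative orthant; that probability is the multivariate
# normal CDF evaluated at the origin.
p_reversal = float(multivariate_normal(mean=mean, cov=cov).cdf([0.0, 0.0]))

# Monte Carlo cross-check of the same probability.
rng = np.random.default_rng(4)
draws = rng.multivariate_normal(mean, cov, size=200_000)
p_mc = float(np.mean(np.all(draws < 0.0, axis=1)))
```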

Journal ArticleDOI
TL;DR: In this paper, the authors considered the Young's modulus and body force of structures as imprecise random fields with bounded statistical moments, and derived the uncertain-but-bounded statistical characteristics of the responses, namely the interval mean value, the interval standard deviation, and bounding distribution functions.
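A one-dimensional caricature of the interval-statistics idea, assuming a toy monotone response u = 1/E under a unit load and a Young's modulus whose mean is only known to lie in an interval (the paper treats full imprecise random fields; every number here is illustrative):

```python
import numpy as np

def response_stats(mean_E, rng, n=100_000):
    """Mean and standard deviation of a toy response u = 1/E for a random
    Young's modulus E ~ N(mean_E, 0.05 * mean_E) under a unit load."""
    E = rng.normal(mean_E, 0.05 * mean_E, n)
    u = 1.0 / E
    return float(u.mean()), float(u.std())

# The modulus mean is only known to lie in an interval (imprecise input).
E_lo, E_hi = 190e9, 210e9
rng = np.random.default_rng(5)
m_min, s_min = response_stats(E_hi, rng)  # stiffer -> smaller displacement
m_max, s_max = response_stats(E_lo, rng)
interval_mean = (m_min, m_max)  # uncertain-but-bounded response mean
```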

Journal ArticleDOI
TL;DR: A new sensitivity index is introduced that captures the influence of input uncertainty on the entire distribution of the multivariate output, without reference to any specific moment of the output.
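A moment-independent index of this kind can be sketched by comparing conditional and unconditional output CDFs, here with a Kolmogorov-Smirnov distance, a scalar output, and a toy two-input model (all of these are assumptions for illustration, not the paper's definition):

```python
import numpy as np

def ks_distance(a, b):
    """Kolmogorov-Smirnov distance between the empirical CDFs of two samples."""
    grid = np.sort(np.concatenate([a, b]))
    fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(fa - fb)))

def cdf_sensitivity(model, n=20_000, n_cond=10, seed=6):
    """For each input, compare the unconditional output CDF with CDFs
    obtained by fixing that input at several values; report the mean KS
    distance (larger = more influential on the output distribution)."""
    rng = np.random.default_rng(seed)
    y_all = model(rng.random((n, 2)))
    indices = []
    for j in range(2):
        dists = []
        for v in np.linspace(0.05, 0.95, n_cond):
            xc = rng.random((n // n_cond, 2))
            xc[:, j] = v  # condition on input j
            dists.append(ks_distance(y_all, model(xc)))
        indices.append(float(np.mean(dists)))
    return indices

# Toy model in which the first input dominates the output.
s = cdf_sensitivity(lambda x: 10.0 * x[:, 0] + 0.1 * x[:, 1])
```

The dominant input should receive a much larger index, since fixing it collapses the output distribution far more than fixing the weak input does.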