
Showing papers on "Cumulative distribution function published in 2003"


Journal ArticleDOI
TL;DR: The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol' variance decomposition, and fast probability integration.

1,780 citations


Journal ArticleDOI
TL;DR: A concise closed-form expression is derived for the characteristic function (c.f.) of MIMO system capacity with arbitrary correlation among the transmitting antennas or among the receiving antennas in frequency-flat Rayleigh-fading environments, and an exact expression for the mean value of the capacity for arbitrary correlation matrices is derived.
Abstract: In this paper, we investigate the capacity distribution of spatially correlated, multiple-input-multiple-output (MIMO) channels. In particular, we derive a concise closed-form expression for the characteristic function (c.f.) of MIMO system capacity with arbitrary correlation among the transmitting antennas or among the receiving antennas in frequency-flat Rayleigh-fading environments. Using the exact expression of the c.f., the probability density function (pdf) and the cumulative distribution function (CDF) can be easily obtained, thus enabling the exact evaluation of the outage and mean capacity of spatially correlated MIMO channels. Our results are valid for scenarios with the number of transmitting antennas greater than or equal to that of receiving antennas with arbitrary correlation among them. Moreover, the results are valid for an arbitrary number of transmitting and receiving antennas in uncorrelated MIMO channels. It is shown that the capacity loss is negligible even with a correlation coefficient between two adjacent antennas as large as 0.5 for the exponential correlation model. Finally, we derive an exact expression for the mean value of the capacity for arbitrary correlation matrices.
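The paper's closed-form characteristic function is not reproduced here, but the two quantities it enables, outage and ergodic (mean) capacity, can be estimated by plain Monte Carlo in the uncorrelated case. A minimal sketch; the antenna counts, SNR, and trial count are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mimo_capacity_samples(nt, nr, snr, n_trials=2000, seed=0):
    """Monte Carlo samples of C = log2 det(I + (snr/nt) H H^*) for an
    uncorrelated frequency-flat Rayleigh-fading MIMO channel."""
    rng = np.random.default_rng(seed)
    caps = np.empty(n_trials)
    for i in range(n_trials):
        # i.i.d. unit-variance complex Gaussian channel matrix
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        caps[i] = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * h @ h.conj().T).real)
    return caps

caps = mimo_capacity_samples(nt=4, nr=2, snr=10.0)
ergodic = caps.mean()                 # estimates the mean (ergodic) capacity
outage_10 = np.quantile(caps, 0.10)   # rate supported with 90% reliability
```

The empirical CDF of `caps` plays the role of the exact CDF the paper derives analytically.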

735 citations


Journal ArticleDOI
TL;DR: This work derives generic formulas for the cumulative distribution function, probability density function, and moment-generating function of the combined signal power for both switch-and-stay combining (SSC) and switch- and-examine combining (SEC) schemes and shows that, in general, the SEC performance improves with additional branches.
Abstract: We investigate the performance of multibranch switched diversity systems. Specifically, we first derive generic formulas for the cumulative distribution function, probability density function, and moment-generating function of the combined signal power for both switch-and-stay combining (SSC) and switch-and-examine combining (SEC) schemes. We then capitalize on these expressions to obtain closed-form expressions for the outage probability and average error rate for various practical communication scenarios of interest. As a byproduct of our analysis, we prove that for SSC with identically distributed and uniformly correlated branches, increasing the number of branches to more than two does not improve the performance, but the performance can be different in the case where the branches are not identically distributed and/or not uniformly correlated. We also show that, in general, the SEC performance improves with additional branches. The mathematical formalism is illustrated with a number of selected numerical examples.

157 citations


Journal ArticleDOI
TL;DR: New classes of distributions based on the `time since failure' random variable X_t = (t − X | X ≤ t) are defined and their interrelations studied; a new ordering based on the mean of X_t is introduced and its relationship with the reversed hazard rate ordering is established.
Abstract: If the random variable X denotes the lifetime (X ≥ 0, with probability one) of a unit, then the random variable X_t = (t − X | X ≤ t), for a fixed t > 0, is known as `time since failure', which is analogous to the residual lifetime random variable used in reliability and survival analysis. The reversed hazard rate function, which is related to the random variable X_t, has received the attention of many researchers in the recent past (cf. Shaked, M., Shanthikumar, J. G. (1994). Stochastic Orders and Their Applications. New York: Academic Press). In this paper, we define some new classes of distributions based on the random variable X_t and study their interrelations. We also define a new ordering based on the mean of the random variable X_t and establish its relationship with the reversed hazard rate ordering.
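The reversed hazard rate that anchors this ordering is r(t) = f(t)/F(t), the density conditioned on failure having occurred by time t. A minimal numeric sketch; the exponential lifetime is purely an illustrative choice:

```python
import math

def reversed_hazard_exponential(t, lam=1.0):
    """Reversed hazard rate r(t) = f(t) / F(t) for an Exp(lam) lifetime.
    The exponential is an illustrative choice; any absolutely continuous
    lifetime distribution works the same way."""
    if t <= 0:
        raise ValueError("t must be positive")
    density = lam * math.exp(-lam * t)   # f(t)
    cdf = 1.0 - math.exp(-lam * t)       # F(t)
    return density / cdf

# The reversed hazard rate is decreasing in t for the exponential lifetime.
r_early = reversed_hazard_exponential(0.5)
r_late = reversed_hazard_exponential(2.0)
```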

144 citations


Journal ArticleDOI
TL;DR: An efficient approach for the evaluation of the Nakagami-m (1960) multivariate probability density function (PDF) and cumulative distribution function (CDF) with arbitrary correlation is presented and useful closed formulas are derived.
Abstract: An efficient approach for the evaluation of the Nakagami-m (1960) multivariate probability density function (PDF) and cumulative distribution function (CDF) with arbitrary correlation is presented. Approximating the correlation matrix with a Green's matrix, useful closed formulas for the joint Nakagami-m PDF and CDF are derived. The proposed approach is a significant theoretical tool that can be efficiently used in the performance analysis of wireless communications systems operating over correlated Nakagami-m fading channels.

132 citations


Journal ArticleDOI
TL;DR: This work considers a narrow-band point-to-point communication system with many (input) transmitters and a single (output) receiver and finds that the optimal signal covariances for the ergodic and outage cases have very different behavior.
Abstract: We consider a narrow-band point-to-point communication system with many (input) transmitters and a single (output) receiver (i.e., a multiple-input single output (MISO) system). We assume the receiver has perfect knowledge of the channel but the transmitter only knows the channel distribution. We focus on two canonical classes of Gaussian channel models: (a) the channel has zero mean with a fixed covariance matrix and (b) the channel has nonzero mean with covariance matrix proportional to the identity. In both cases, we are able to derive simple analytic expressions for the ergodic average and the cumulative distribution function (c.d.f.) of the mutual information for arbitrary input (transmission) signal covariance. With minimal numerical effort, we then determine the ergodic and outage capacities and the corresponding capacity-achieving input signal covariances. Interestingly, we find that the optimal signal covariances for the ergodic and outage cases have very different behavior. In particular, under certain conditions, the outage capacity optimal covariance is a discontinuous function of the parameters describing the channel (such as strength of the correlations or the nonzero mean of the channel).

131 citations


Journal ArticleDOI
TL;DR: In this article, a new law for the three-dimensional spatial distance between the foci of successive earthquakes is reported: the cumulative distribution of the distances follows the modified Zipf-Mandelbrot law.
Abstract: Discovery of a new law for the three-dimensional spatial distance between the foci of successive earthquakes is reported. Analyzing the seismic data taken between 1984 and 2001 in southern California, it is found that the cumulative distribution of the distances follows the modified Zipf-Mandelbrot law, revealing the complexity of the geometry of the events.

113 citations


Journal ArticleDOI
TL;DR: In this article, the cumulative distribution function of a multivariate normal distribution is considered and a recursive integration method is used to evaluate non-centred orthoscheme probabilities with respect to any positive definite correlation matrix and any mean vector.
Abstract: The evaluation of the cumulative distribution function of a multivariate normal distribution is considered. The multivariate normal distribution can have any positive definite correlation matrix and any mean vector. The approach taken has two stages. In the first stage, it is shown how non-centred orthoscheme probabilities can be evaluated by using a recursive integration method. In the second stage, some ideas of Schläfli and Abrahamson are extended to show that any non-centred orthant probability can be expressed as differences between at most (m − 1)! non-centred orthoscheme probabilities. This approach allows an accurate evaluation of many multivariate normal probabilities which have important applications in statistical practice.
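The recursive orthoscheme decomposition itself is not sketched here, but the target quantity, a multivariate normal CDF, can be evaluated with standard numerical tools and checked against a classical closed form in two dimensions. A sketch using SciPy's generic numerical integrator rather than the paper's method:

```python
import numpy as np
from scipy.stats import multivariate_normal

# P(X1 <= 0, X2 <= 0) for a zero-mean bivariate normal with correlation rho
# has the classical closed form 1/4 + arcsin(rho) / (2*pi), a handy check.
rho = 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
p = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([0.0, 0.0])
closed_form = 0.25 + np.arcsin(rho) / (2 * np.pi)
```

For ρ = 0.5 both values come out to 1/3; higher dimensions and nonzero means work the same way through the `cdf` call, which is where methods like the paper's recursive integration matter for accuracy and speed.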

105 citations


Journal ArticleDOI
TL;DR: In this article, various properties of Kendall distribution functions are studied for both populations and samples. The Kendall distribution function of (X, Y) is the distribution function of the random variable H(X, Y), where H is the joint distribution function of X and Y.

97 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider nonparametric estimation of an object such as a probability density or a regression function, and show that such estimators can achieve the rate-wise minimax rate of convergence on suitable function spaces, while, at the same time, when "plugged-in," estimate efficiently (at a rate of n^{-1/2} with the best constant) many functionals of the object.
Abstract: We consider nonparametric estimation of an object such as a probability density or a regression function. Can such an estimator achieve the ratewise minimax rate of convergence on suitable function spaces, while, at the same time, when "plugged-in," estimate efficiently (at a rate of $n^{-1/2}$ with the best constant) many functionals of the object? For example, can we have a density estimator whose definite integrals are efficient estimators of the cumulative distribution function? We show that this is impossible for very large sets, for example, expectations of all functions bounded by $M<\infty$. However, we also show that it is possible for sets as large as indicators of all quadrants, that is, distribution functions. We give appropriate constructions of such estimates.

91 citations


Journal ArticleDOI
TL;DR: This study presents an efficient, flexible and easily applied stochastic non-Gaussian simulation method capable of reliably converging to a target power spectral density function and marginal probability density function, or a close relative thereof.
Abstract: Methods for stochastic simulation of sample functions have increasingly addressed the preservation of both spectral and probabilistic contents to offer an accurate description of the dynamic behavior of system input for reliability analysis. This study presents an efficient, flexible and easily applied stochastic non-Gaussian simulation method capable of reliably converging to a target power spectral density function and marginal probability density function, or a close relative thereof. Several existing spectral representation-based non-Gaussian simulation algorithms are first summarized. The new algorithm is then presented and compared with these methods to demonstrate its efficacy. The advantages and limitations of the new method are highlighted and shown to complement those of the existing algorithms.

Journal ArticleDOI
TL;DR: This article demonstrates that with Hermite interpolation of the inverse CDF the authors can obtain very small error bounds close to machine precision, using the adaptive interval splitting method.
Abstract: The inversion method for generating nonuniform random variates has some advantages compared to other generation methods, since it monotonically transforms uniform random numbers into non-uniform random variates. Hence, it is the method of choice in the simulation literature. However, except for some simple cases where the inverse of the cumulative distribution function is a simple function, we need numerical methods. Often inversion by "brute force" is used, applying either very slow iterative methods or linear interpolation of the CDF and huge tables. But then the user has to accept unnecessarily large errors or excessive memory requirements that slow down the algorithm. In this article, we demonstrate that with Hermite interpolation of the inverse CDF we can obtain very small error bounds close to machine precision. Using our adaptive interval splitting method, this accuracy is reached with moderately sized tables that allow for a fast and simple generation procedure.
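A stripped-down version of the table-based idea: tabulate the CDF on a grid, then invert by interpolation. This sketch uses plain linear interpolation on a fixed grid instead of the article's Hermite interpolation with adaptive interval splitting, so its error bounds are coarser:

```python
import numpy as np

def make_inverse_cdf_table(cdf, lo, hi, n=1024):
    """Tabulate cdf on [lo, hi] and return a sampler that inverts it by
    linear interpolation (a simplified stand-in for the article's Hermite
    interpolation with adaptive splitting)."""
    xs = np.linspace(lo, hi, n)
    us = cdf(xs)                       # monotone table of CDF values
    def inverse(u):
        return np.interp(u, us, xs)    # inverse CDF via interpolation
    return inverse

# Example target: standard exponential, CDF F(x) = 1 - exp(-x).
inv = make_inverse_cdf_table(lambda x: 1.0 - np.exp(-x), 0.0, 20.0)
median = inv(0.5)   # should be close to ln 2
```

Feeding `inv` with uniform random numbers then yields exponential variates monotonically, which is the property that makes inversion attractive for simulation.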

Patent
27 May 2003
TL;DR: In this article, a model construction module (MCM) is proposed for constructing a probabilistic model (PM) of a wireless environment (RN) in which a target device (T) communicates using signals that have a measurable signal value (x), such as signal strength.
Abstract: A model construction module (MCM) for constructing a probabilistic model (PM) of a wireless environment (RN) in which a target device (T) communicates using signals that have a measurable signal value (x), such as signal strength. The model construction module forms several submodels (611 - 631) of the wireless environment (RN). Each submodel indicates a probability distribution (F1 - F3) for signal values at one or more locations (Q1 - QY) in the wireless environment. The module combines the submodels to a probabilistic model (PM) of the wireless environment (RN), such that the probabilistic model indicates a probability distribution for signal values at several locations in the wireless environment. Alternatively, the model may insert new locations to a single model based on a combination of existing locations. The combination of submodels or existing locations comprises combining the inverse cumulative distribution functions of the submodels or existing locations.
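Combining submodels through their inverse cumulative distribution functions amounts to averaging quantile functions level by level. A minimal numeric sketch of that general idea; the two Gaussian signal-strength submodels and the equal weights are illustrative assumptions, not details from the patent:

```python
import numpy as np

def combine_inverse_cdfs(samples_a, samples_b, weight_a=0.5):
    """Combine two signal-value submodels by averaging their inverse CDFs
    (quantile functions) at matched probability levels."""
    levels = np.linspace(0.01, 0.99, 99)
    qa = np.quantile(samples_a, levels)   # inverse CDF of submodel A
    qb = np.quantile(samples_b, levels)   # inverse CDF of submodel B
    return levels, weight_a * qa + (1.0 - weight_a) * qb

rng = np.random.default_rng(1)
# Hypothetical signal-strength submodels in dBm at two nearby locations.
levels, q = combine_inverse_cdfs(rng.normal(-60, 3, 5000), rng.normal(-70, 3, 5000))
median = q[49]   # combined quantile at level 0.50
```

Averaging quantile functions keeps the result monotone, so `q` is itself a valid inverse CDF for the combined location.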

Journal ArticleDOI
TL;DR: In this view, a stimulus becomes a CS if its onset substantially reduces the subject's uncertainty about the time of occurrence of the next US, which has two limiting forms: the state of maximal uncertainty (minimal knowledge) and the cumulative probability function.

Journal ArticleDOI
TL;DR: It is proved that the level sets of a probability density function correspond to minimum volume sets and the conditions for which the inverse proposition is verified.

Journal ArticleDOI
TL;DR: Novel expressions for the probability density function and the cumulative distribution function of the signal-to-noise ratio (SNR) at the output of an L-branch selection combining receiver, operating in Weibull fading, are derived.
Abstract: Novel expressions for the probability density function and the cumulative distribution function of the signal-to-noise ratio (SNR) at the output of an L-branch selection combining receiver, operating in Weibull fading, are derived. Capitalising on these expressions, the outage probability and the average output SNR are obtained in closed form.
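For the special case of independent, identically distributed branches, selection combining picks the largest branch SNR, so the output CDF is simply the branch CDF raised to the Lth power. A sketch of that baseline; the Weibull shape and scale values are illustrative:

```python
import numpy as np

def sc_output_cdf(x, l_branches, shape=2.0, scale=1.0):
    """CDF of the output SNR of an L-branch selection combiner with i.i.d.
    Weibull-distributed branch SNRs: F_out(x) = F_branch(x) ** L."""
    f_branch = 1.0 - np.exp(-(np.asarray(x) / scale) ** shape)  # Weibull CDF
    return f_branch ** l_branches

# Outage probability at a fixed threshold drops as branches are added.
p1 = sc_output_cdf(0.5, l_branches=1)
p4 = sc_output_cdf(0.5, l_branches=4)
```

The paper's contribution is the correlated, non-identical generalization of this quantity, which no longer factors into a simple power.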

Journal ArticleDOI
TL;DR: In this paper, a hazard function is defined as a characterization of the risk that an event occurs at a given point in time, conditioned on the event not having already occurred.

Journal ArticleDOI
TL;DR: In this article, a methodology based on function approximations and the convolution theorem is presented to estimate the structural failure probability, which is applicable to structural reliability problems with any number of random variables and any kind of random variable distribution.

Journal ArticleDOI
TL;DR: The cumulative probability distribution of the sparseness time interval in the Internet is studied by the method of data analysis, and the data are found to be well described by q-exponential distributions, which maximize the Tsallis entropy indexed by q less than or larger than unity.
Abstract: The cumulative probability distribution of the sparseness time interval in the Internet is studied by the method of data analysis. The round-trip time between a local host and a destination host through ten-odd routers is measured using the ping command, i.e., by performing an echo experiment. The data are found to be well described by q-exponential distributions, which maximize the Tsallis entropy indexed by q less than or larger than unity, showing a scale-invariant feature of the system. The network is observed to itinerate over a series of nonequilibrium stationary states characterized by Tsallis statistics.
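The q-exponential behind these fits is e_q(x) = [1 + (1 − q)x]^{1/(1−q)}, which recovers the ordinary exponential as q → 1. A quick sketch showing the heavier-than-exponential tail for q > 1; the value q = 1.2 is illustrative, not the paper's fitted index:

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]_+^(1/(1-q)); reduces to
    exp(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# For q > 1 the decay at negative arguments is slower than exponential,
# i.e., the distribution tail is heavier.
tail_q = q_exponential(-5.0, 1.2)
tail_e = math.exp(-5.0)
```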

Journal ArticleDOI
TL;DR: It is proposed that the cumulative probability be used instead of the probability density when transforming non-uniform distributions for FAST; this increases the accuracy of the transformation by reducing errors and makes it more convenient to use in practice.
Abstract: The Fourier amplitude sensitivity test (FAST) can be used to calculate the relative variance contribution of model input parameters to the variance of predictions made with functional models. It is widely used in the analysis of complicated process modeling systems. This study provides an improved transformation procedure of FAST for the non-uniform distributions that can be used to represent the input parameters. Here it is proposed that the cumulative probability be used instead of the probability density when transforming non-uniform distributions for FAST. This improvement increases the accuracy of the transformation by reducing errors and makes the transformation more convenient to use in practice. In an evaluation, the improved procedure was demonstrated to have very high accuracy in comparison to the procedure that is currently in wide use.
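Transforming through the cumulative probability is the probability integral transform: map values in (0, 1) through the inverse CDF of the target input distribution. A sketch with an exponential target chosen for illustration (the deterministic grid stands in for the values a FAST search curve would produce):

```python
import numpy as np

def transform_uniform_to_target(u, inverse_cdf):
    """Map values u in (0, 1) to a non-uniform input distribution via the
    inverse of its cumulative distribution function, the transform the
    improved FAST procedure relies on."""
    return inverse_cdf(np.asarray(u))

# Example target: Exp(1), whose inverse CDF is -ln(1 - u).
u = np.linspace(0.001, 0.999, 9999)
x = transform_uniform_to_target(u, lambda v: -np.log1p(-v))
sample_mean = x.mean()   # should approximate the Exp(1) mean of 1
```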

Journal ArticleDOI
TL;DR: The asymptotic probability density of nonlinear phase noise, often called the Gordon-Mollenauer effect, is derived analytically when the number of fiber spans is large.
Abstract: The asymptotic probability density of nonlinear phase noise, often called the Gordon-Mollenauer effect, is derived analytically when the number of fiber spans is large. Nonlinear phase noise is the summation of infinitely many independently distributed noncentral chi-square random variables with two degrees of freedom. The mean and the standard deviation of those random variables are both proportional to the square of the reciprocal of all odd natural numbers. Nonlinear phase noise can also be accurately modeled as the summation of a noncentral chi-square random variable with two degrees of freedom and a Gaussian random variable.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a nonparametric test for the drift and variance components of a continuous time model, based on the block bootstrap specification test for diffusion processes.
Abstract: This paper introduces bootstrap specification tests for diffusion processes. In the one-dimensional case, the proposed test is closest to the nonparametric test introduced by Ait-Sahalia (1996), in the sense that both procedures determine whether the drift and variance components of a particular continuous time model are correctly specified. However, we compare cumulative distribution functions, while Ait-Sahalia compares densities. In the multidimensional and/or multifactor case, the proposed test is based on the comparison of the empirical CDF of the actual data and the empirical CDF of the simulated data. The limiting distributions of both tests are functionals of zero mean Gaussian processes with covariance kernels that reflect data dependence and parameter estimation error (PEE). In order to obtain asymptotically valid critical values for the tests, we use an empirical process version of the block bootstrap which properly accounts for the contribution of PEE. An example based on a simple version of the Cox, Ingersoll and Ross (1985) square root process is outlined and related Monte Carlo experiments are carried out. These experiments suggest that the test has good finite sample properties, even for samples as small as 400 observations when tests are formed using critical values constructed with as few as 100 bootstrap replications.
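The core comparison in the multidimensional case is between two empirical CDFs. A minimal version of that statistic, the sup-distance between the empirical CDFs of actual and simulated data, is sketched below; the block-bootstrap machinery for critical values is not reproduced, and the Gaussian samples are illustrative:

```python
import numpy as np

def ecdf_sup_distance(actual, simulated):
    """Sup-norm distance between the empirical CDFs of two samples, the
    kind of comparison the bootstrap specification test builds on."""
    grid = np.sort(np.concatenate([actual, simulated]))
    f_a = np.searchsorted(np.sort(actual), grid, side="right") / len(actual)
    f_s = np.searchsorted(np.sort(simulated), grid, side="right") / len(simulated)
    return np.max(np.abs(f_a - f_s))

rng = np.random.default_rng(2)
# Well-specified model: simulated data from the same law as the "actual" data.
same = ecdf_sup_distance(rng.normal(0, 1, 400), rng.normal(0, 1, 400))
# Misspecified model: simulated data from a shifted law.
diff = ecdf_sup_distance(rng.normal(0, 1, 400), rng.normal(1, 1, 400))
```

A large value of the statistic relative to its bootstrap critical value signals misspecification of the drift or variance components.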

Proceedings ArticleDOI
Wang, Vemuri, Rao, Chen
01 Jan 2003
TL;DR: The cumulative residual entropy (CRE) is a measure of information that parallels Shannon entropy and is used here to solve the uni- and multimodal image alignment problem for parameterized (rigid, affine and projective) transformations.
Abstract: We use the cumulative distribution of a random variable to define the information content in it, and use it to develop a novel measure of information that parallels Shannon entropy, which we dub cumulative residual entropy (CRE). The key features of CRE may be summarized as: (1) its definition is valid in both the continuous and discrete domains, (2) it is mathematically more general than the Shannon entropy, and (3) its computation from sample data is easy, and these computations converge asymptotically to the true values. We define the cross-CRE (CCRE) between two random variables and apply it to solve the uni- and multimodal image alignment problem for parameterized (rigid, affine and projective) transformations. The key strengths of the CCRE over the now popular mutual information method (based on Shannon's entropy) are that the former has significantly larger noise immunity and a much larger convergence range over the field of parameterized transformations. These strengths of CCRE are demonstrated via experiments on synthesized and real image data.
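The definition underlying CRE replaces the density in Shannon entropy with the survival function: CRE(X) = −∫ P(|X| > t) log P(|X| > t) dt. A simple plug-in estimator from sample data, not the authors' exact computation; the exponential test distribution is chosen because its CRE is exactly 1:

```python
import numpy as np

def cumulative_residual_entropy(samples, n_grid=4000):
    """Estimate CRE(X) = -integral of P(|X| > t) log P(|X| > t) dt using
    the empirical survival function on a grid (simple plug-in estimator)."""
    xs = np.sort(np.abs(np.asarray(samples, dtype=float)))
    grid = np.linspace(0.0, xs[-1], n_grid)
    # Empirical survival function P(|X| > t) at each grid point.
    surv = 1.0 - np.searchsorted(xs, grid, side="right") / xs.size
    safe = np.where(surv > 0, surv, 1.0)        # avoid log(0)
    integrand = np.where(surv > 0, -surv * np.log(safe), 0.0)
    dt = grid[1] - grid[0]
    return 0.5 * dt * (integrand[:-1] + integrand[1:]).sum()  # trapezoidal rule

rng = np.random.default_rng(3)
cre = cumulative_residual_entropy(rng.exponential(1.0, 20000))
# For Exp(1), the true CRE is exactly 1.
```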

Patent
29 May 2003
TL;DR: In this article, a block-based statistical timing analysis technique is provided in which the delay and arrival times in the circuit are modeled as random variables, which leads to efficient expressions for both max and addition operations.
Abstract: A block-based statistical timing analysis technique is provided in which the delay and arrival times in the circuit are modeled as random variables. The arrival times are modeled as Cumulative Probability Distribution Functions (CDFs) and the gate delays are modeled as Probability Density Functions (PDFs). This leads to efficient expressions for both max and addition operations, the two key functions in both regular and statistical timing analysis. Although the proposed approach can handle any form of the CDF, the CDFs may also be modeled as piecewise linear for computational efficiency. The dependency caused by reconvergent fanout is addressed, which is a necessary first step in a statistical STA framework. Reconvergent fanouts are efficiently handled by a common mode removal approach using statistical “subtraction.”
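The two key operations can be sketched on a discretized grid: for independent arrival times, the CDF of the max is the pointwise product of CDFs, and adding an independent gate delay convolves the gate-delay PDF with the arrival-time CDF. The uniform distributions and the grid below are illustrative, not the patent's piecewise-linear forms:

```python
import numpy as np

def max_cdf(cdf_a, cdf_b):
    """CDF of max(A, B) for independent arrival times: the pointwise
    product of the two CDFs (the 'max' operation of statistical STA)."""
    return cdf_a * cdf_b

def add_cdf(pdf_gate, cdf_arrival, dt):
    """CDF of (arrival time + gate delay) for independent terms: convolve
    the gate-delay PDF with the arrival-time CDF on a shared grid."""
    return np.convolve(pdf_gate, cdf_arrival)[: len(cdf_arrival)] * dt

t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
cdf_a = np.clip(t / 4.0, 0.0, 1.0)                    # arrival uniform on [0, 4]
cdf_b = np.clip((t - 1.0) / 4.0, 0.0, 1.0)            # arrival uniform on [1, 5]
pdf_g = np.where((t >= 0.0) & (t <= 2.0), 0.5, 0.0)   # gate delay uniform on [0, 2]

cdf_max = max_cdf(cdf_a, cdf_b)       # latest of two converging arrivals
cdf_sum = add_cdf(pdf_g, cdf_a, dt)   # arrival propagated through the gate
```

These two closed operations are what make block-based traversal of the timing graph efficient; handling reconvergent fanout, as the patent notes, needs the additional common-mode removal step.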

Journal ArticleDOI
TL;DR: The proposed communication structure is investigated in important downlink categories, showing that, in theory, the message architecture is optimally efficient for all (M, 1) systems and extremely efficient when M ≫ N.
Abstract: We consider a multielement antenna system that uses M transmit and N receive antennas [an (M, N) wireless link] impaired by additive white Gaussian noise in a quasistatic flat-fading channel environment. The transmitter, which is subject to a power constraint, does not know the random outcome of the matrix channel but does know the channel statistics. The link operates under a probability of outage constraint. We present a novel architecture using stratified space-time diagonals to express a message for efficient communications. The special message arrangement, which is termed stratified-diagonal-BLAST (SD-BLAST), enables receiver signal processing that substantially mutes self interference caused by multipath without incurring waste of space-time. We investigate the proposed communication structure in important downlink categories, showing that, in theory, the message architecture is optimally efficient for all (M, 1) systems and extremely efficient when M ≫ N. We quantify the capacity performance of SD-BLAST using empirically generated complementary cumulative distribution functions (CCDFs) for (16, 5), (8, 3), and (4, 2) systems to exhibit near optimal performance, most especially for the (16, 5) system.

Journal ArticleDOI
TL;DR: A method for estimating and validating the cumulative distribution of a function of random variables (independent or dependent) is presented and preliminary numerical experiments indicate that this approximation is close to the actual distribution after a few iterations.
Abstract: A method for estimating and validating the cumulative distribution of a function of random variables (independent or dependent) is presented and examined. The method creates a sequence of bounds that will converge to the distribution function in the limit for functions of independent random variables or of random variables of known dependencies. Moreover, an approximation is constructed from and contained in these bounds. Preliminary numerical experiments indicate that this approximation is close to the actual distribution after a few iterations. Several examples are given to illustrate the method.

Proceedings ArticleDOI
13 Oct 2003
TL;DR: New jitter and noise modeling and analysis methods for both design and test are discussed, based on statistical signal theory invoking the probability density function and cumulative distribution function, together with their deterministic and random component distributions, to quantify jitter, noise, and bit error rate for communication systems.
Abstract: As the communication speed/data rate approaches 1 Gb/s and beyond, timing jitter and amplitude noise become the major limiting factors for system performance. Traditional methods used in simulating, analyzing, modeling, and quantifying jitter and noise in terms of peak-to-peak and/or RMS are no longer accurate and sufficient. As such, new methods with better accuracy and comprehension are called for. We first discuss new jitter and noise modeling and analysis methods for both design and test, based on statistical signal theory invoking the probability density function (pdf) and cumulative distribution function (cdf) and the corresponding deterministic and random component distributions, to quantify jitter, noise, and bit error rate (BER) for communication systems. Secondly, we introduce jitter and noise transfer functions and the important roles they play in estimating relevant jitter, noise, and BER in the system. Thirdly, we introduce and illustrate how those methods can be used in designing and testing Gb/s-and-beyond communication systems, such as Fibre Channel (FC), Gigabit Ethernet (GbE), and PCI Express.

Journal ArticleDOI
TL;DR: In this paper, a saddlepoint approximation to the cumulative distribution function of a random vector is presented, which can be applied to random vectors of arbitrary length, subject only to the requirement that the distribution approximated either have a density or be confined to a lattice, and have a cumulant generating function.
Abstract: This paper presents a saddlepoint approximation to the cumulative distribution function of a random vector. The proposed approximation has accuracy comparable to that of existing expansions valid in two dimensions, and may be applied to random vectors of arbitrary length, subject only to the requirement that the distribution approximated either have a density or be confined to a lattice, and have a cumulant generating function. The result is derived by directly inverting the multivariate moment generating function. The result is applied to sufficient statistics from a regression model with exponential errors, and compared to an existing method in two dimensions. The result is also applied to multivariate inference from a data set arising from a case-control study of endometrial cancer.

Journal ArticleDOI
TL;DR: In this paper, a bi-exponential probability density function was proposed that fits the observed behaviour of daily clearness indices, which is shown to be the most usual shape at 50 locations between latitudes 18.43°N and 64.81°N.

Journal ArticleDOI
TL;DR: In this article, the authors presented the theoretical expressions of the cumulative distribution function (cdf) of the maximum intensity during a reference period taking the temporal variation of load intensity into account.