
Showing papers on "Cumulative distribution function published in 2002"


ReportDOI
01 Nov 2002
TL;DR: The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol’ variance decomposition, and fast probability integration.
Abstract: The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol’ variance decomposition, and fast probability integration. Desirable features of Monte Carlo analysis in conjunction with Latin hypercube sampling are described in discussions of the following topics: (i) properties of random, stratified and Latin hypercube sampling, (ii) comparisons of random and Latin hypercube sampling, (iii) operations involving Latin hypercube sampling (i.e. correlation control, reweighting of samples to incorporate changed distributions, replicated sampling to test reproducibility of results), (iv) uncertainty analysis (i.e. cumulative distribution functions, complementary cumulative distribution functions, box plots), (v) sensitivity analysis (i.e. scatterplots, regression analysis, correlation analysis, rank transformations, searches for nonrandom patterns), and (vi) analyses involving stochastic (i.e. aleatory) and subjective (i.e. epistemic) uncertainty. Published by Elsevier Science Ltd.
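The stratification property that distinguishes Latin hypercube sampling from simple random sampling can be sketched in one dimension (an illustrative sketch only, not the paper's implementation; the function names are invented here):

```python
import random

def lhs_sample(n, seed=0):
    """One-dimensional Latin hypercube sample on [0, 1): exactly one
    point is drawn uniformly from each of the n equal-probability
    strata [k/n, (k+1)/n), then the order is shuffled."""
    rng = random.Random(seed)
    points = [(k + rng.random()) / n for k in range(n)]
    rng.shuffle(points)
    return points

# Unlike a simple random sample, every stratum is hit exactly once.
pts = lhs_sample(10)
print(sorted(int(p * 10) for p in pts))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

In higher dimensions, each coordinate is stratified this way and the columns are paired by independent random permutations, which is what enables the correlation control mentioned above.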

644 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of assessing the distributional consequences of a treatment on some outcome variable of interest when treatment intake is (possibly) nonrandomized, but there is a binary instrument available for the researcher.
Abstract: This article considers the problem of assessing the distributional consequences of a treatment on some outcome variable of interest when treatment intake is (possibly) nonrandomized, but there is a binary instrument available for the researcher. Such a scenario is common in observational studies and in randomized experiments with imperfect compliance. One possible approach to this problem is to compare the counterfactual cumulative distribution functions of the outcome with and without the treatment. This article shows how to estimate these distributions using instrumental variable methods, and a simple bootstrap procedure is proposed to test distributional hypotheses, such as equality of distributions, first-order and second-order stochastic dominance. These tests and estimators are applied to the study of the effects of veteran status on the distribution of civilian earnings. The results show a negative effect of military service during the Vietnam era that appears to be concentrated on the lower tail of ...
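The basic objects being compared here are empirical CDFs; a sketch of an equality-of-distributions statistic over two raw samples (the paper's instrumental-variable-corrected counterfactual estimator is more involved, and the function names below are ours):

```python
import bisect

def ecdf(sample):
    """Return the empirical CDF of a sample as a callable step function."""
    xs = sorted(sample)
    n = len(xs)
    return lambda t: bisect.bisect_right(xs, t) / n

def max_cdf_gap(a, b):
    """Kolmogorov-Smirnov-type statistic: the largest vertical gap
    between the two empirical CDFs, evaluated at the jump points."""
    Fa, Fb = ecdf(a), ecdf(b)
    return max(abs(Fa(t) - Fb(t)) for t in sorted(set(a) | set(b)))
```

In the article's setting, critical values for such statistics are obtained by bootstrap resampling rather than from asymptotic tables.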

614 citations


Patent
28 Mar 2002
TL;DR: In this article, a multilevel optical receiver (150) can comprise a plurality of comparators (405) that generally correspond with the number of levels in a multi-level data stream.
Abstract: A multilevel optical receiver (150) can comprise a plurality of comparators (405) that generally correspond with the number of levels in a multilevel data stream. Each comparator (405) can be individually controlled and fed a decision threshold in order to decode a multilevel signal. The multilevel optical receiver (150) can generate a statistical characterization of the received symbols in the form of a marginal cumulative distribution function (CDF) or probability density function (pdf). This characterization can be used to produce a set of e-support estimates from which conditional pdfs are derived for each of the transmission symbols. These conditional pdfs may then be used to determine decision thresholds for decoding the received signal. The conditional pdfs may further be used to continuously estimate the fidelity or error rate of the received signal without the transmission of a testing sequence. The e-supports may further be used to automatically control the gain on the receiver.

181 citations


Journal ArticleDOI
TL;DR: This work focuses on other performance measures, namely average combined output signal-to-noise ratio, amount of fading, and outage probability; the first two depend only on the moments, while the outage probability depends solely on the CDF.
Abstract: A great deal of attention has been devoted in the literature to studying the bit error rate (BER) performance of diversity combining systems in the presence of Rayleigh, Rice, and Nagakami-m fading. By comparison, the literature is relatively sparse in comparable analyses over log-normal channels which typically characterize shadowing from indoor obstacles and moving human bodies. One reason for this disparity stems from the difficulty in evaluating the exact average BER when log-normal variates are involved, using, for example, the moment-generating function (MGF) approach, due to the inability of expressing the MGF itself in a simple closed form. Since it is possible to evaluate the marginal and joint statistical moments as well as the cumulative distribution function (CDF) associated with a log-normal distribution in closed form, we rather focus here on other performance measures, namely, average combined output signal-to-noise ratio, amount of fading, and outage probability. The first two performance measures depend only on the moments, whereas the outage probability depends solely on the cdf. Closed-form expressions (in terms of known functions), single-integral representations, or upper and lower bounds are obtained for these measures corresponding to maximal-ratio combining, selection combining, and switch-and-stay combining schemes, allowing for the possibility of correlation between the two branches. Numerical evaluations of these expressions illustrating the performances of each individual diversity type as well as comparisons among them are also presented.
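Because the log-normal CDF is available in closed form, the outage probability reduces to a Gaussian tail evaluation in the dB domain; a minimal sketch under the usual parameterization (variable names are ours, single branch, no combining):

```python
import math

def lognormal_outage(gamma_th_db, mu_db, sigma_db):
    """Outage probability P(SNR < gamma_th) for a log-normal SNR,
    i.e. an SNR whose dB value is N(mu_db, sigma_db^2): this is the
    Gaussian CDF evaluated at the threshold expressed in dB."""
    z = (gamma_th_db - mu_db) / (sigma_db * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))
```

The combining schemes in the paper modify the effective mean and variance seen at the combiner output, but the final CDF evaluation has this same form.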

116 citations


Journal ArticleDOI
TL;DR: The probabilistic fatigue analysis methodology developed for DARWIN to address hard alpha material anomalies is presented and it is shown that the life approximation function and importance sampling methods significantly reduce computation time compared to the Monte Carlo method.
Abstract: Conventional gas turbine rotor life prediction methodologies are based on nominal conditions that do not adequately account for material and manufacturing anomalies that can degrade the structural integrity of high-energy rotors. To account for these anomalies, the Rotor Integrity Subcommittee of the Aerospace Industries Association recommended adoption of a probabilistic damage tolerance approach to supplement the current safe-life methodology. The recommendation led to the development of a computer program called DARWIN that computes the probability of fracture as a function of flight cycles, considering random defect occurrence and location, random inspection schedules, and several other random variables. The probabilistic fatigue analysis methodology developed for DARWIN to address hard alpha material anomalies is presented. The capability of this computer program is demonstrated using several realistic rotor models provided by aircraft engine manufacturers. It is shown that the life approximation function and importance sampling methods significantly reduce computation time (nearly two orders of magnitude) compared to the Monte Carlo method. In addition, an optimal zone sampling strategy is presented that can minimize the total number of samples required to achieve a desired sampling accuracy for a given confidence interval. This probabilistic methodology can be used to focus design efforts on variables that have the most influence on risk reduction.
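The speedup that importance sampling offers over plain Monte Carlo for small failure probabilities can be seen in a toy one-dimensional version (a sketch only; DARWIN's zone-based sampling is far more elaborate, and the names and proposal choice here are ours):

```python
import math
import random

def failure_prob_is(threshold, n=50_000, shift=None, seed=1):
    """Estimate p = P(X > threshold) for X ~ N(0, 1) by importance
    sampling from the shifted proposal N(shift, 1). Plain Monte Carlo
    wastes almost every sample when p is tiny; shifting the proposal
    onto the failure boundary concentrates samples where they matter."""
    rng = random.Random(seed)
    if shift is None:
        shift = threshold  # centre the proposal on the failure boundary
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)
        if y > threshold:
            # likelihood ratio phi(y) / phi(y - shift) = exp(-y*s + s^2/2)
            total += math.exp(-y * shift + 0.5 * shift * shift)
    return total / n
```

For threshold 4, the exact value is Q(4) ≈ 3.2e-5; a crude Monte Carlo run of the same size would see only a handful of failures, while the importance-sampled estimate is accurate to a few percent.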

101 citations


Journal ArticleDOI
TL;DR: Analytical results provide insights into the tradeoff between diversity gain and combination losses, in concert with increasing orders of diversity branches in an energy-sharing communication system.
Abstract: The paper examines the impact of Gaussian distributed weighting errors (in the channel gain estimates used for coherent combination) on both the output statistics of a hybrid selection/maximal-ratio (SC/MRC) receiver and the degradation of the average symbol-error rate (ASER) performance as compared with the ideal case. New expressions are derived for the probability density function, cumulative distribution function and moment generating function (MGF) of the coherent hybrid SC/MRC combiner output signal-to-noise ratio (SNR). The MGF is then used to derive exact, closed-form, ASER expressions for binary and M-ary modulations in conjunction with a nonideal hybrid SC/MRC receiver in a Rayleigh fading environment. Results for both selection combining (SC) and maximal-ratio combining (MRC) are obtained as limiting cases. Additionally, the effect of the weighting errors on both the outage rate of error probability and the average combined SNR is investigated. These analytical results provide insights into the tradeoff between diversity gain and combination losses, in concert with increasing orders of diversity branches in an energy-sharing communication system.

98 citations


01 Jan 2002
TL;DR: Course notes covering probability theory, random variables (continuous and discrete) and their distribution and density functions, expected value and variance, moments and characteristic functions, elements of estimation theory, and stochastic processes.
Abstract: Probability theory: Review of Set theory; introduction to probability, axioms of probability; joint and conditional probability; Bayes theorem. Random Variables: The concept of a random variable (RV); continuous and discrete RVs; probability distribution and density functions, properties; some standard examples; Functions of an RV, distribution and densities of functions of an RV, examples; expected value/mean and variance; moments and characteristic functions; two RVs: joint distribution and density functions; correlation, covariance, orthogonality and independence; conditional distribution and density functions. Elements of Estimation theory: Estimation of mean and variance; Chebyshev inequality; Parameter Estimation, Properties of Estimators; Cramer-Rao bound. Stochastic Processes: Introduction, Statistics of stochastic processes, correlation and covariance; Stationarity; Autocorrelation, Power density spectrum, and Wiener Khinchin Theorem; Linear Systems with stochastic inputs.

95 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived theoretical first-order probability density functions for the energy density and magnitude of electromagnetic fields inside mode-tuned or mode-stirred reverberation chambers operated at relatively low frequencies.
Abstract: Novel theoretical first-order probability density functions are derived for the energy density and magnitude of electromagnetic fields inside mode-tuned or mode-stirred reverberation chambers operated at relatively low frequencies. Deviations of physical characteristics for fields in undermoded chambers from those for ideal reverberation are quantified. These deviations are then used as parameters of the distributions. The distribution parameters can be easily and independently calculated from the measured tuner sweep data functions. The derivation is based on an eigenvalue decomposition of the 3/spl times/3 polarization matrix for the stir-averaged local field, followed by a polarization decomposition of the principal components. The theoretical distributions are compared with measured data, showing improved agreement and a significantly lower mismatch at lower frequencies compared to ideal /spl chi//sub 6//sup (2)/ distributions. The previously observed "flattening" of the cumulative distribution function is confirmed, resulting in a now calculable decrease of the mean value and an increase of the uncertainty for field statistics as frequency is lowered.

74 citations


Journal ArticleDOI
TL;DR: In this paper, the bivariate Nakagami-m distribution with arbitrary fading parameters is derived, obtaining the probability density function, the cumulative distribution function and the central moments, and limitations of that distribution are discussed.
Abstract: The bivariate Nakagami-m distribution with arbitrary fading parameters is derived, obtaining the probability density function, the cumulative distribution function and the central moments. Additionally, limitations of that distribution are discussed.

66 citations


Journal ArticleDOI
TL;DR: In this article, a proof is given of a formula used by A. F. Jenkinson in the 1970s that converts data that are ranked according to their magnitude into an estimate of the associated cumulative probability.
Abstract: It is often useful to make initial estimates of changing extremes without the use of a specific statistical model, though a statistical model is likely to be desirable as a second step. A proof is given of a formula used by A. F. Jenkinson in the 1970s that converts data that are ranked according to their magnitude into an estimate of the associated cumulative probability. This formula is compared to its exact equivalent, based on a beta distribution of the first kind. It is also compared to similar ranking formulas, which have been recommended, mostly in hydrology, based on similar ideas. Some results concerning the effect of serial correlation on Jenkinson's formula are reported. For initial estimates of return periods or percentiles of cumulative probability from time series of data, Jenkinson's method performs as well as many of the other methods. Empirical ranking methods are not so useful for estimating the rarest percentiles in climatology, those in the most extreme 100/N% tails of the dis...
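A plotting-position formula of this family converts the rank r of an observation (out of N) directly into a cumulative-probability estimate. Sketched below with the general form (r − a)/(N + 1 − 2a): taking a = 0.31 reproduces the constants usually quoted for Jenkinson's formula (treat that constant as an assumption here), while a = 0 gives the exact beta-distribution mean rank r/(N + 1):

```python
def plotting_position(rank, n, a=0.31):
    """Estimated cumulative probability of the rank-th smallest of n
    ranked observations, via the plotting-position family
    (rank - a) / (n + 1 - 2a)."""
    return (rank - a) / (n + 1 - 2 * a)
```

A return period then follows as 1 / (1 − F) for the largest ranks, which is exactly the kind of initial, model-free estimate the article discusses.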

59 citations


Journal ArticleDOI
TL;DR: In this paper, the authors relate the structure of intersection graphs to the support of the nonparametric maximum likelihood estimate (NPMLE) of the cumulative distribution function (CDF) for multivariate data, and distinguish two types of non-uniqueness of the NPMLE: representational and mixture.
Abstract: Right, left or interval censored multivariate data can be represented by an intersection graph. Focussing on the bivariate case, the authors relate the structure of such an intersection graph to the support of the nonparametric maximum likelihood estimate (NPMLE) of the cumulative distribution function (CDF) for such data. They distinguish two types of non-uniqueness of the NPMLE: representational, arising when the likelihood is unaffected by the distribution of the estimated probability mass within regions, and mixture, arising when the masses themselves are not unique. The authors provide a brief overview of estimation techniques and examine three data sets.

Journal ArticleDOI
TL;DR: In this article, a statistical process control chart called the cumulative probability control chart (CPC-chart) is proposed, which is motivated from two existing statistical control charts, the cumulative count control chart and the cumulative quantity control chart, and it can resolve a technical plotting inconvenience of the CCC- and CQC-charts.
Abstract: A statistical process control chart called the cumulative probability control chart (CPC-chart) is proposed. The CPC-chart is motivated from two existing statistical control charts, the cumulative count control chart (CCC-chart) and the cumulative quantity control chart (CQC-chart). The CCC- and CQC-charts are effective in monitoring production processes when the defect rate is low and the traditional p- and c-charts do not perform well. In a CPC-chart, the cumulative probability of the geometric or exponential random variable is plotted against the sample number, and hence the actual cumulative probability is indicated on the chart. Apart from maintaining all the favourable features of the CCC- and CQC-charts, the CPC-chart is more flexible and it can resolve a technical plotting inconvenience of the CCC- and CQC-charts.
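The quantity plotted on a CPC-chart is simply the CDF of the monitored count evaluated at each observation. For the geometric case (number of items inspected up to and including a defect, with defect rate p) this is a sketch of the plotted values (the helper name is ours):

```python
def cpc_points(counts, p):
    """Cumulative probabilities for a CPC-chart monitoring geometric
    counts: F(x) = 1 - (1 - p)**x is the probability of observing a
    defect within the first x items when the defect rate is p."""
    return [1.0 - (1.0 - p) ** x for x in counts]
```

Because the plotted value always lies in [0, 1), the chart avoids the wide, defect-rate-dependent vertical scale of the CCC- and CQC-charts, which is the plotting inconvenience the abstract refers to.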

Journal ArticleDOI
TL;DR: The Kolmogorov-Smirnov (K-S) test shows that there is no significant discrepancy between the estimated and measured distribution of PM10 and PM2.5 at the 95% confidence level, so the distribution of air pollutants is easily estimated when the wind speed data are known.

Posted Content
TL;DR: In this article, a Bayesian approach to goodness-of-fit is proposed to test whether or not a given parametric model is compatible with the data at hand, representing the nonparametric alternative as a mixture of Beta distributions and basing the test on a functional distance to the tested model.
Abstract: We consider a Bayesian approach to goodness of fit, that is, to the problem of testing whether or not a given parametric model is compatible with the data at hand. We thus consider a parametric family \(\{F_\theta,\ \theta\in\Theta\}\), where \(F_\theta\) denotes a cumulative distribution function with parameter \(\theta\). The null hypothesis is \(H_0: F = F_{\theta_0}\) for an unknown \(\theta_0\), that is, there exists \(\theta_0\) such that \(F_{\theta_0}(X) \sim \mathcal{U}(0,1)\). If \(H_0\) does not hold, \(F_{\theta_0}(X)\) is a random variable on \((0,1)\) which is not distributed as \(\mathcal{U}(0,1)\). The alternative nonparametric hypothesis can thus be interpreted as \(F_{\theta_0}(X)\) being distributed from a general cdf \(G\), which is infinite dimensional. Instead of using a functional basis as in Verdinelli and Wasserman (1998), we represent \(G\) as an (infinite) mixture of Beta distributions, \(\varepsilon\,\mathcal{U}(0,1) + (1-\varepsilon)\sum_k p_k\,\mathrm{Be}(\alpha_k,\beta_k)\). Estimation within both parametric and nonparametric structures is implemented using MCMC algorithms that estimate the number of components in the mixture. Since we are concerned with a goodness-of-fit problem, it is of more interest to consider a functional distance to the tested model, \(d(F, F_\theta)\), as the basis of our test, rather than the corresponding Bayes factor, since the latter puts more emphasis on the parameters. We therefore propose a new test procedure based on the posterior expectation of \(d(F, F_\theta)\), with both an asymptotic justification and a finite-sampler implementation.

Journal ArticleDOI
TL;DR: A compact expression is derived for the moment-generating function (MGF) of the output of a generalized selection combiner for independent but nonidentically distributed Rayleigh fading paths which is then used to derive a single integral expression for the symbol error probability of, for example, M-PSK.
Abstract: We derive a compact expression for the moment-generating function (MGF) of the output of a generalized selection combiner for independent but nonidentically distributed Rayleigh fading paths, which is then used to derive a single integral expression for the symbol error probability of, for example, M-PSK. Because of the simple product form of the MGF and its ability to be expressed as a partial fraction expansion, we are then able to invert this function to derive the probability density function and cumulative distribution function of the combiner output, from which the outage probability can then be evaluated in closed form. An example of the latter is given for an exponentially decaying power delay profile.

Journal ArticleDOI
TL;DR: In this paper, the diagonal expansion of a bivariate density may also be formulated using integral operators on the cumulative distribution function of the density matrix, and a continuous extension is made for correspondence analysis on two categorical variables.

Proceedings ArticleDOI
16 Dec 2002
TL;DR: An efficient deterministic simulation model for the Nakagami-Hoyt (1960, 1947) fading channel model (Q model) is proposed by using the concept of Rice's sum of sinusoids.
Abstract: An efficient deterministic simulation model for the Nakagami-Hoyt (1960, 1947) fading channel model (Q model) is proposed by using the concept of Rice's sum of sinusoids. Analytical formulas are derived for the amplitude and phase probability density functions (PDF), level-crossing rate (LCR) and average duration of fades (ADF). By using a numerical optimization procedure, we show how the statistical properties of the Q model and the corresponding simulation model can be adapted to those of an equivalent mobile satellite channel for an environment with heavy shadowing. It is demonstrated by several theoretical and simulation results that the Q model and the simulation model derived therefrom can provide excellent characterization of the corresponding measurement results with respect to the complementary cumulative distribution function (CDF), normalized LCR and ADF. Finally, it is shown that the Q model enables a better statistical fit to the measurement data compared to the Rayleigh model due to its increased flexibility.
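The core of any Rice sum-of-sinusoids simulator is a deterministic superposition of weighted cosines per Gaussian quadrature component; a minimal sketch (the computation of gains and frequencies, e.g. by the numerical optimization the paper describes, is omitted, and the function name is ours):

```python
import math

def sos_sample(t, gains, freqs, phases):
    """One sample of a deterministic sum-of-sinusoids process
    mu(t) = sum_n c_n * cos(2*pi*f_n*t + theta_n). The Q model uses
    two such components, with unequal powers, for the in-phase and
    quadrature parts of the channel."""
    return sum(c * math.cos(2.0 * math.pi * f * t + th)
               for c, f, th in zip(gains, freqs, phases))
```

The time-averaged power of such a process is the sum of the squared gains divided by two, which is how the gains are matched to a target channel power.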

Journal ArticleDOI
TL;DR: In this article, several pre-analysis measures that help to expose the behavior of L1-norm minimization solutions are described. They are primarily based on familiar elements of the linear programming solution to L1-norm minimization, such as slack variables and the reduced-cost vector.
Abstract: Several pre-analysis measures which help to expose the behavior of L1-norm minimization solutions are described. The pre-analysis measures are primarily based on familiar elements of the linear programming solution to L1-norm minimization, such as slack variables and the reduced-cost vector. By examining certain elements of the linear programming solution in a probabilistic light, it is possible to derive the cumulative distribution function (CDF) associated with univariate L1-norm residuals. Unlike traditional least squares (LS) residual CDFs, it is found that L1-norm residual CDFs fail to follow the normal distribution in general, and instead are characterized by both discrete and continuous (i.e. piecewise) segments. It is also found that an L1 equivalent to LS redundancy numbers exists and that these L1 equivalents are a byproduct of the univariate L1 residual CDF. Probing deeper into the linear programming solution, it is found that certain combinations of observations which are capable of tolerating large-magnitude gross errors can be predicted by comprehensively tabulating the signs of slack variables associated with the L1 residuals. The developed techniques are illustrated on a two-dimensional trilateration network.
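The gross-error tolerance of L1-norm solutions described above is already visible in the simplest special case, estimating a single location parameter, where the L1 solution is the sample median (a sketch only, not the paper's linear-programming machinery):

```python
def l1_location(xs):
    """argmin_m sum |x_i - m| is attained at the sample median: the
    one-parameter analogue of an L1-norm adjustment."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# A single large gross error barely moves the L1 estimate, whereas
# the least-squares estimate (the mean) is dragged out to 22.
print(l1_location([1.0, 2.0, 3.0, 4.0, 100.0]))  # 3.0
```

The paper's slack-variable tabulation generalizes this robustness analysis to networks of correlated observations.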

Journal ArticleDOI
TL;DR: In this article, the authors proposed an extended Weibull distribution (EWD) model for breakdown voltage estimation that avoids the difficulties which appear in the conventional Weibull distribution model.
Abstract: Although the Weibull distribution is widely used in a variety of reliability applications, difficulties in its treatment, particularly in three parameter cases in the maximum likelihood estimation, hinder us from using the distribution. The extended Weibull distribution proposed by Marshall and Olkin (1997) can avoid the difficulties which appear in the conventional Weibull distribution models. This paper shows the maximum likelihood estimation method in the extended Weibull distribution model. The paper also illustrates some typical applications for breakdown voltage estimation in which the extended models are superior to the conventional Weibull models. The central discussion is whether the shape parameters in the extended model accomplish the mass shifting effect of the distribution.

Journal ArticleDOI
01 Feb 2002-Metrika
TL;DR: In this article, the authors provided a comparison between C′′pmk and other existing generalizations of Cpmk on the accuracy of measuring process performance for processes with asymmetric tolerances.
Abstract: Pearn et al. (1999) considered a capability index C′′pmk, a new generalization of Cpmk, for processes with asymmetric tolerances. In this paper, we provide a comparison between C′′pmk and other existing generalizations of Cpmk on the accuracy of measuring process performance for processes with asymmetric tolerances. We show that the new generalization C′′pmk is superior to other existing generalizations of Cpmk. Under the assumption of normality, we derive explicit forms of the cumulative distribution function and the probability density function of the estimated index \(\hat{C}''_{pmk}\). We show that the cumulative distribution function and the probability density function of the estimated index \(\hat{C}''_{pmk}\) can be expressed in terms of a mixture of the chi-square distribution and the normal distribution. The explicit forms of the cumulative distribution function and the probability density function considerably simplify the complexity for analyzing the statistical properties of the estimated index \(\hat{C}''_{pmk}\).
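For reference, a plug-in estimate of the symmetric-tolerance index Cpmk that C′′pmk generalizes (a sketch under a symmetric specification; the asymmetric form studied in the paper modifies the numerator terms, and the function name is ours):

```python
import math

def cpmk_hat(xs, lsl, usl, target):
    """Plug-in estimate of Cpmk = min(USL - mu, mu - LSL) /
    (3 * sqrt(sigma^2 + (mu - target)^2)), which penalizes both
    spread and deviation of the process mean from the target."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    denom = 3.0 * math.sqrt(var + (mu - target) ** 2)
    return min(usl - mu, mu - lsl) / denom
```

Because the numerator involves min() of Gaussian terms and the denominator a noncentral chi-square-type term, the estimator's distribution is the chi-square/normal mixture the abstract describes.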

Journal ArticleDOI
TL;DR: Two approaches for the calculation of the average outage duration (AOD) of diversity systems over generalized fading channels are presented, using exact closed-form expressions for the AOD of maximal-ratio combiner over independent and identically distributed (i.i.d.) Rayleigh and Rice fading channels.
Abstract: This paper presents two approaches for the calculation of the average outage duration (AOD) of diversity systems over generalized fading channels. First, a "classical" probability density function (pdf)-based approach is used to obtain exact closed-form expressions for the AOD of maximal-ratio combiner (MRC) over independent and identically distributed (i.i.d.) Rayleigh and Rice fading channels. On the other hand, relying upon a numerical technique for inverting Laplace transforms of cumulative distribution functions, and in conjunction with the calculation of the joint characteristic function (CF) of the combined output signal-to-noise ratio process and its time derivative, a CF-based approach is adopted to compute the AOD of MRC over non-i.i.d. Rayleigh and Rician diversity paths. The mathematical expressions are illustrated by presenting and interpreting numerical results/plots, showing the impact of the power delay profile, the angles of arrival, and the angle spreads on the AOD of diversity systems operating over typical fading channels of practical interest.
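The pdf-based approach rests on the classical identity AOD = outage probability / level-crossing rate; for a single Rayleigh branch both ingredients are in closed form (a sketch with standard textbook expressions, not the paper's MRC results; the function name is ours):

```python
import math

def aod_rayleigh(rho, fd):
    """Average outage duration for a single Rayleigh-faded branch:
    outage probability P(rho) = 1 - exp(-rho^2) divided by the
    level-crossing rate N(rho) = sqrt(2*pi) * fd * rho * exp(-rho^2),
    where rho is the threshold normalized to the RMS envelope level
    and fd is the maximum Doppler frequency in Hz."""
    p_out = 1.0 - math.exp(-rho * rho)
    lcr = math.sqrt(2.0 * math.pi) * fd * rho * math.exp(-rho * rho)
    return p_out / lcr
```

Note that the AOD scales as 1/fd: faster fading produces more frequent but shorter outages, a behavior the paper's MRC results preserve.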

Proceedings ArticleDOI
07 Aug 2002
TL;DR: This work investigates the performance of both switch and stay combining (SSC) and switch and examine combining (SEC) multi-branch diversity schemes and proves that for SSC with identically distributed and uniformly correlated branches, increasing the number of branches to more than two does not improve the performance.
Abstract: We investigate the performance of both switch and stay combining (SSC) and switch and examine combining (SEC) multi-branch diversity schemes. We first derive generic formulas for the cumulative distribution function, probability density function, and moment generating function of the combiner output signal for both schemes. We then capitalize on these expressions to obtain closed-form expressions for the outage probability and average error rate for various communication scenarios of interest. As a byproduct of our analysis we prove that for SSC with identically distributed and uniformly correlated branches, increasing the number of branches to more than two does not improve the performance, but the performance can be different in the case where the branches are not identically distributed. We also show that, in general, the SEC performance improves with additional branches. The mathematical formalism is illustrated with a number of selected numerical examples.

Journal ArticleDOI
TL;DR: In this article, the statistical properties of the phase errors are related to the waveguide imperfections using a variation of the effective index method, and the filtering quality of the AWG is investigated by considering the behavior of its transfer function in the presence of random phase errors.
Abstract: Arrayed waveguide gratings (AWGs) are important components for the realization of wavelength-division multiplexing optical networks. Their filtering performance is limited by the existence of phase errors in the grating waveguides due to fabrication imperfections. In this paper, the statistical properties of the phase errors are related to the waveguide imperfections using a variation of the effective index method. The filtering quality of the AWG is then investigated by considering the behavior of its transfer function in the presence of random phase errors. The probability density function of the transfer function's sidelobes is evaluated numerically, and the results are justified using theoretical considerations. Finally, the behavior of the maximum sidelobe level is also analyzed numerically, and universal diagrams are presented that allow the estimation of its mean value, standard deviation, and cumulative distribution function for every specific AWG.

Proceedings ArticleDOI
13 May 2002
TL;DR: This paper studies the outage performance of a simple space-time block code, the Alamouti scheme, in the presence of correlated Rayleigh or Ricean fading, and derives expressions for the cumulative distribution function of the uncoded symbol error rate.
Abstract: The performance of space-time block codes is well understood from an average (over the random channel) error point of view. However, inherent to the idea of diversity gain is the issue of reliability, which is better captured through an outage analysis indicating the quality of performance guaranteed with a certain level of reliability. In this paper, we study the outage performance of a simple space-time block code, the Alamouti scheme, in the presence of correlated Rayleigh or Ricean fading. We derive expressions for the cumulative distribution function of the uncoded symbol error rate and verify the accuracy of our analytical expressions through comparison with numerical results. In addition, we introduce a quantitative measure to compare the diversity gain offered by two channels at a given outage rate.

Patent
30 Sep 2002
TL;DR: In this article, the conditional legitimate probability of each packet is evaluated using a Bayesian estimation technique, which is accomplished by comparing the attributes carried by an incoming packet against the "nominal" distribution of attributes of a legitimate packet stream.
Abstract: The present invention is a methodology to prioritize packets based on the conditional probability that, given the values of the attributes carried by a packet, the packet is a legitimate one. We will call this the conditional legitimate probability of a packet from here onward. The conditional legitimate probability of each packet is evaluated based on a Bayesian estimation technique. This is accomplished by comparing the attributes carried by an incoming packet against the “nominal” distribution of attributes of a legitimate packet stream. Since an exact prioritization of packets based on their conditional legitimate probability would require offline, multiple-pass operations, e.g. sorting, we take the following alternative approach to realize an online, one-pass selective dropping scheme. In particular, we maintain the cumulative distribution function (CDF) of the conditional legitimate probability of all incoming packets and apply a threshold-based selective dropping mechanism according to the conditional probability value computed for each incoming packet. To speed up the computation of the conditional legitimate probability for each incoming packet, we may, as an alternative, use the logarithmic version of the equation to implement the Bayesian estimation process. Other features of the invention include: providing means to guarantee minimum throughput of particular (pre-configured) type(s) of packets; providing a filtering mechanism to suppress the noise during estimation/maintenance of the nominal attribute distribution; applying state-of-the-art efficient algorithms/data structures for quantile and histogram building/updates; using proven, industrial-strength load-shedding algorithms as a submodule in the overload control algorithm; and being amenable to practical implementation to support online, one-pass processing on high-speed communication links.
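A toy version of the threshold-based selective dropping step (a sketch only; the patent's one-pass quantile/histogram structures and the Bayesian scoring itself are not reproduced, and all names here are invented):

```python
import bisect

class CdfDropper:
    """Online sketch of CDF-based selective dropping: keep a sorted
    window of recent scores (the conditional legitimate probabilities)
    and drop any packet whose score falls below the q-th quantile of
    that window."""

    def __init__(self, q, window=1000):
        self.q, self.window, self.scores = q, window, []

    def admit(self, score):
        """Record the score and return True if the packet is admitted."""
        bisect.insort(self.scores, score)
        if len(self.scores) > self.window:
            self.scores.pop(0)  # crude aging; a real design would expire by time
        cutoff = self.scores[int(self.q * (len(self.scores) - 1))]
        return score >= cutoff
```

Under overload, raising q tightens the admission threshold, which is the overload-control knob the abstract alludes to.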

Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of numerically evaluating the cumulative distribution function of a quadratic form in normal variables and investigated the efficiency of two new truncation bounds.
Abstract: This paper is concerned with numerically evaluating the cumulative distribution function of a quadratic form in normal variables. The efficiency of two new truncation bounds and of all existing truncation bounds is investigated. We also find that the suggestion in the literature to further split truncation errors may reduce computational efficiency, and that the optimum splitting rate can differ across situations. A practical solution is provided. The paper also discusses a modified secant algorithm for finding the critical value of the distribution at any given significance level.
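The two pieces of the paper's problem can be illustrated together. The sketch below swaps the truncated series evaluation for a plain Monte Carlo estimate of the c.d.f. (an assumption for illustration, not the paper's method), and then runs an ordinary secant iteration for the critical value; the paper's modified secant algorithm is more refined.

```python
import bisect
import random

def quadform_samples(weights, n=50_000, seed=0):
    """Monte Carlo draws of Q = sum_i w_i * Z_i**2 with Z_i iid N(0,1) --
    a simulation stand-in for the series evaluation of the c.d.f."""
    rng = random.Random(seed)
    return sorted(sum(w * rng.gauss(0.0, 1.0) ** 2 for w in weights)
                  for _ in range(n))

def cdf(samples, c):
    """Empirical P(Q <= c) from the sorted draws."""
    return bisect.bisect_right(samples, c) / len(samples)

def secant_critical_value(samples, alpha, c0=1.0, c1=10.0, tol=2e-3):
    """Plain secant iteration for the critical value c solving
    F(c) = 1 - alpha."""
    target = 1.0 - alpha
    f0, f1 = cdf(samples, c0) - target, cdf(samples, c1) - target
    for _ in range(60):
        if abs(f1) < tol or abs(f1 - f0) < 1e-12:
            break
        c0, c1 = c1, c1 - f1 * (c1 - c0) / (f1 - f0)
        f0, f1 = f1, cdf(samples, c1) - target
    return c1
```

With equal weights (1, 1, 1) the quadratic form is chi-square with 3 degrees of freedom, whose 5% critical value is about 7.81, so the iteration can be sanity-checked against tables.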

Journal ArticleDOI
TL;DR: This work derives an analytical expression in the form of a probability function describing the distribution of random times necessary for a particle to diffusively travel a specified distance that is amenable to use in modeling both fractured and porous media systems.
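A discrete random-walk simulation conveys the idea behind such travel-time distributions. This is only an illustrative analogue under stated assumptions (a symmetric lattice walk standing in for continuous diffusion), not the paper's analytical probability function.

```python
import random
import statistics

def exit_steps(n_sites, rng):
    """Number of steps for a symmetric +-1 random walk started at 0 to
    first travel a distance of n_sites lattice sites in either direction --
    a discrete stand-in for the random diffusive travel time over a
    specified distance."""
    pos = steps = 0
    while abs(pos) < n_sites:
        pos += rng.choice((-1, 1))
        steps += 1
    return steps

rng = random.Random(42)
times = [exit_steps(10, rng) for _ in range(2000)]
# For this walk the mean first-exit time is exactly n_sites**2 steps,
# echoing the t ~ L**2 / (2D) scaling of diffusion; the histogram of
# `times` approximates the travel-time probability function.
mean_steps = statistics.mean(times)
```

The spread of `times` around its mean is wide and right-skewed, which is why a full distribution of travel times, rather than just a mean, matters for transport modeling in fractured and porous media.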

Proceedings ArticleDOI
10 Dec 2002
TL;DR: In this paper, the authors extended the Khatri distribution to obtain closed-form expressions for the outage probability and the channel capacity complementary cumulative distribution function of multiple-input-multiple-output (MIMO) systems employing maximal ratio combining (MRC) and operating over Rician fading channels.
Abstract: This paper extends the Khatri distribution (see Khatri, C.G., Ann. Math. Stat., vol.35, p.1807-10, 1964) of the largest eigenvalue of central complex Wishart matrices to the non-central case. It then applies the resulting new statistical results to obtain closed-form expressions for the outage probability and the channel capacity complementary cumulative distribution function (CCDF) of multiple-input-multiple-output (MIMO) systems employing maximal ratio combining (MRC) and operating over Rician fading channels. When applicable, these expressions are compared to special cases previously reported in the literature dealing with the outage probability of (i) MIMO systems over Rayleigh fading channels and (ii) single-input-multiple-output (SIMO) systems over Rician fading channels. As a double check, these analytical results are validated by Monte-Carlo simulations and, as an illustration of the mathematical formalism, some numerical examples for particular cases of interest are plotted and discussed. These results show that, given a fixed number of total antenna elements, (i) SIMO systems are equivalent to multiple-input-single-output (MISO) systems and (ii) it is preferable to distribute the number of antenna elements evenly between the transmitter and the receiver for minimum outage probability.
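The Monte-Carlo validation mentioned above can be sketched for the smallest interesting case. The code below is an assumption-laden toy, not the paper's formalism: it uses a 2x2 channel so the largest Wishart eigenvalue has a closed form via the trace and determinant, and it models the Rician line-of-sight component crudely as a nonzero per-entry mean `mu`.

```python
import math
import random

def largest_eigenvalue(rng, mu=0.0):
    """Largest eigenvalue of W = H * H^H for a 2x2 complex Gaussian
    channel H with per-entry mean mu (mu = 0 is Rayleigh; mu != 0 is a
    crude stand-in for a Rician line-of-sight component).  For a 2x2
    Hermitian W the eigenvalues follow from its trace and determinant."""
    s = math.sqrt(0.5)  # CN(mu, 1): real and imaginary parts ~ N(., 1/2)
    h = [[complex(rng.gauss(mu, s), rng.gauss(0.0, s)) for _ in range(2)]
         for _ in range(2)]
    w00 = abs(h[0][0]) ** 2 + abs(h[0][1]) ** 2
    w11 = abs(h[1][0]) ** 2 + abs(h[1][1]) ** 2
    w01 = h[0][0] * h[1][0].conjugate() + h[0][1] * h[1][1].conjugate()
    tr, det = w00 + w11, w00 * w11 - abs(w01) ** 2
    return 0.5 * (tr + math.sqrt(max(tr * tr - 4.0 * det, 0.0)))

def outage_probability(threshold, mu=0.0, n=20_000, seed=1):
    """MIMO-MRC outage, P(largest eigenvalue < threshold), by Monte Carlo."""
    rng = random.Random(seed)
    return sum(largest_eigenvalue(rng, mu) < threshold for _ in range(n)) / n
```

Sweeping `threshold` traces the outage curve; one minus it is the corresponding complementary cumulative distribution function of the post-combining SNR gain.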

Journal ArticleDOI
TL;DR: In this article, the authors studied the analytic properties of the c.d.f. of a statistic under very general assumptions, and showed that there can be points where the c.d.f. of a given statistic is not analytic; such points do not depend on the parameters of the model but only on the properties of the statistic itself.
Abstract: Often neither the exact density nor the exact cumulative distribution function (c.d.f.) of a statistic of interest is available in the statistics and econometrics literature (e.g., the maximum likelihood estimator of the autocorrelation coefficient in a simple Gaussian AR(1) model with zero start-up value). In other cases the exact c.d.f. of a statistic of interest is very complicated despite the statistic being “simple” (e.g., the circular serial correlation coefficient, or a quadratic form of a vector uniformly distributed over the unit n-sphere). The first part of the paper tries to explain why this is the case by studying the analytic properties of the c.d.f. of a statistic under very general assumptions. Differential geometric considerations show that there can be points where the c.d.f. of a given statistic is not analytic, and such points do not depend on the parameters of the model but only on the properties of the statistic itself. The second part of the paper derives the exact c.d.f. of a ratio of quadratic forms in normal variables, and for the first time a closed form solution is found. These results are then specialized to the maximum likelihood estimator of the autoregressive parameter in a Gaussian AR(1) model with zero start-up value, which is shown to have precisely those properties highlighted in the first part of the paper.
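The AR(1) example can be made concrete by simulation. The sketch below approximates the estimator's c.d.f. by Monte Carlo, which is exactly the fallback the paper's closed-form result replaces; the function names and sample sizes are illustrative choices.

```python
import random

def ar1_mle(y):
    """MLE of the autoregressive parameter in a zero start-up Gaussian
    AR(1) -- a ratio of quadratic forms in the underlying normals."""
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def mle_draws(rho, T=25, n=20_000, seed=3):
    """Sorted Monte Carlo draws of the estimator; their empirical c.d.f.
    approximates the exact c.d.f. studied in the paper."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        y = [0.0]  # zero start-up value
        for _ in range(T):
            y.append(rho * y[-1] + rng.gauss(0.0, 1.0))
        draws.append(ar1_mle(y))
    draws.sort()
    return draws
```

The fraction of draws at or below x estimates the c.d.f. at x; with a short sample (T = 25) the familiar downward bias of the estimator is already visible in the median of the draws.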

Journal ArticleDOI
TL;DR: In this article, the cumulative distribution function of the strength of compressed plates is constructed using a response surface approach, by applying a first-order reliability method to calculate the probability of failure successively at different levels of loading.
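The construction described in the TL;DR can be sketched in miniature. Everything here is a hypothetical stand-in: a toy lognormal strength model replaces the paper's response surface, and plain Monte Carlo replaces the first-order reliability method at each load level.

```python
import bisect
import math
import random

def plate_strength(rng):
    """Hypothetical compressed-plate strength: a nominal strength scaled
    by lognormal material and geometry factors.  This toy model stands in
    for the structural response surface used in the paper."""
    material = math.exp(rng.gauss(0.0, 0.10))
    geometry = math.exp(rng.gauss(0.0, 0.05))
    return 250.0 * material * geometry  # nominal strength, e.g. in MPa

def strength_cdf(load_levels, n=20_000, seed=7):
    """Failure probability P(strength <= load) at each load level; read
    together, these points trace out the strength c.d.f.  (Monte Carlo
    here, where the paper applies a first-order reliability method at
    each level.)"""
    rng = random.Random(seed)
    strengths = sorted(plate_strength(rng) for _ in range(n))
    return [bisect.bisect_right(strengths, q) / n for q in load_levels]
```

Evaluating the failure probability at a grid of load levels and plotting the pairs (load, probability) yields the cumulative distribution function of plate strength directly.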