
Showing papers on "Cumulative distribution function published in 2008"


Journal ArticleDOI
TL;DR: This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space and is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions.
Abstract: Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems, including one that analyzes the reliability of a microelectromechanical system device that currently available methods have difficulty solving either accurately or efficiently.
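The adaptive refinement idea in this abstract can be sketched in a few lines. The fragment below is a simplified illustration, not the paper's algorithm: it assumes scikit-learn, a toy two-dimensional limit-state function g, a crude closest-to-the-limit-state acquisition rule in place of a proper expected-feasibility criterion, and plain Monte Carlo on the surrogate instead of the paper's multimodal adaptive importance sampling.

```python
# Hedged sketch: adaptively refine a Gaussian process surrogate near the limit
# state g(x) = 0, then estimate the failure probability from the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):  # toy limit-state function (an assumption); failure when g(x) < 0
    return 4.0 - x[:, 0] ** 2 - x[:, 1]

X = rng.normal(size=(8, 2))                # very small initial design
y = g(X)
candidates = rng.normal(size=(5000, 2))    # pool of candidate training points

for _ in range(30):                        # adaptive refinement loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    # pick the candidate whose prediction sits closest to the limit state
    # relative to its uncertainty (a crude stand-in for expected feasibility)
    k = int(np.argmin(np.abs(mu) / np.maximum(sd, 1e-12)))
    X = np.vstack([X, candidates[k]])
    y = np.append(y, g(candidates[k:k + 1]))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X, y)
# plain Monte Carlo on the refined surrogate; only a handful of true g
# evaluations were needed to build it
samples = rng.normal(size=(200_000, 2))
pf = float(np.mean(gp.predict(samples) < 0.0))
print("estimated failure probability:", pf)
```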

804 citations


Book
25 Feb 2008
TL;DR: In this book, the authors introduce the concepts of probability and probability distributions and their basic properties, covering distributions such as the Bernoulli, exponential, and generalized extreme value distributions.
Abstract: Preface. Acknowledgments. About the Authors.
CHAPTER 1: Concepts of Probability. 1.1 Introduction. 1.2 Basic Concepts. 1.3 Discrete Probability Distributions. 1.3.1 Bernoulli Distribution. 1.3.2 Binomial Distribution. 1.3.3 Poisson Distribution. 1.4 Continuous Probability Distributions. 1.4.1 Probability Distribution Function, Probability Density Function, and Cumulative Distribution Function. 1.4.2 The Normal Distribution. 1.4.3 Exponential Distribution. 1.4.4 Student's t-distribution. 1.4.5 Extreme Value Distribution. 1.4.6 Generalized Extreme Value Distribution. 1.5 Statistical Moments and Quantiles. 1.5.1 Location. 1.5.2 Dispersion. 1.5.3 Asymmetry. 1.5.4 Concentration in Tails. 1.5.5 Statistical Moments. 1.5.6 Quantiles. 1.5.7 Sample Moments. 1.6 Joint Probability Distributions. 1.6.1 Conditional Probability. 1.6.2 Definition of Joint Probability Distributions. 1.6.3 Marginal Distributions. 1.6.4 Dependence of Random Variables. 1.6.5 Covariance and Correlation. 1.6.6 Multivariate Normal Distribution. 1.6.7 Elliptical Distributions. 1.6.8 Copula Functions. 1.7 Probabilistic Inequalities. 1.7.1 Chebyshev's Inequality. 1.7.2 Fréchet-Hoeffding Inequality. 1.8 Summary.
CHAPTER 2: Optimization. 2.1 Introduction. 2.2 Unconstrained Optimization. 2.2.1 Minima and Maxima of a Differentiable Function. 2.2.2 Convex Functions. 2.2.3 Quasiconvex Functions. 2.3 Constrained Optimization. 2.3.1 Lagrange Multipliers. 2.3.2 Convex Programming. 2.3.3 Linear Programming. 2.3.4 Quadratic Programming. 2.4 Summary.
CHAPTER 3: Probability Metrics. 3.1 Introduction. 3.2 Measuring Distances: The Discrete Case. 3.2.1 Sets of Characteristics. 3.2.2 Distribution Functions. 3.2.3 Joint Distribution. 3.3 Primary, Simple, and Compound Metrics. 3.3.1 Axiomatic Construction. 3.3.2 Primary Metrics. 3.3.3 Simple Metrics. 3.3.4 Compound Metrics. 3.3.5 Minimal and Maximal Metrics. 3.4 Summary. 3.5 Technical Appendix. 3.5.1 Remarks on the Axiomatic Construction of Probability Metrics. 3.5.2 Examples of Probability Distances. 3.5.3 Minimal and Maximal Distances.
CHAPTER 4: Ideal Probability Metrics. 4.1 Introduction. 4.2 The Classical Central Limit Theorem. 4.2.1 The Binomial Approximation to the Normal Distribution. 4.2.2 The General Case. 4.2.3 Estimating the Distance from the Limit Distribution. 4.3 The Generalized Central Limit Theorem. 4.3.1 Stable Distributions. 4.3.2 Modeling Financial Assets with Stable Distributions. 4.4 Construction of Ideal Probability Metrics. 4.4.1 Definition. 4.4.2 Examples. 4.5 Summary. 4.6 Technical Appendix. 4.6.1 The CLT Conditions. 4.6.2 Remarks on Ideal Metrics.
CHAPTER 5: Choice under Uncertainty. 5.1 Introduction. 5.2 Expected Utility Theory. 5.2.1 St. Petersburg Paradox. 5.2.2 The von Neumann-Morgenstern Expected Utility Theory. 5.2.3 Types of Utility Functions. 5.3 Stochastic Dominance. 5.3.1 First-Order Stochastic Dominance. 5.3.2 Second-Order Stochastic Dominance. 5.3.3 Rothschild-Stiglitz Stochastic Dominance. 5.3.4 Third-Order Stochastic Dominance. 5.3.5 Efficient Sets and the Portfolio Choice Problem. 5.3.6 Return versus Payoff. 5.4 Probability Metrics and Stochastic Dominance. 5.5 Summary. 5.6 Technical Appendix. 5.6.1 The Axioms of Choice. 5.6.2 Stochastic Dominance Relations of Order n. 5.6.3 Return versus Payoff and Stochastic Dominance. 5.6.4 Other Stochastic Dominance Relations.
CHAPTER 6: Risk and Uncertainty. 6.1 Introduction. 6.2 Measures of Dispersion. 6.2.1 Standard Deviation. 6.2.2 Mean Absolute Deviation. 6.2.3 Semistandard Deviation. 6.2.4 Axiomatic Description. 6.2.5 Deviation Measures. 6.3 Probability Metrics and Dispersion Measures. 6.4 Measures of Risk. 6.4.1 Value-at-Risk. 6.4.2 Computing Portfolio VaR in Practice. 6.4.3 Backtesting of VaR. 6.4.4 Coherent Risk Measures. 6.5 Risk Measures and Dispersion Measures. 6.6 Risk Measures and Stochastic Orders. 6.7 Summary. 6.8 Technical Appendix. 6.8.1 Convex Risk Measures. 6.8.2 Probability Metrics and Deviation Measures.
CHAPTER 7: Average Value-at-Risk. 7.1 Introduction. 7.2 Average Value-at-Risk. 7.3 AVaR Estimation from a Sample. 7.4 Computing Portfolio AVaR in Practice. 7.4.1 The Multivariate Normal Assumption. 7.4.2 The Historical Method. 7.4.3 The Hybrid Method. 7.4.4 The Monte Carlo Method. 7.5 Backtesting of AVaR. 7.6 Spectral Risk Measures. 7.7 Risk Measures and Probability Metrics. 7.8 Summary. 7.9 Technical Appendix. 7.9.1 Characteristics of Conditional Loss Distributions. 7.9.2 Higher-Order AVaR. 7.9.3 The Minimization Formula for AVaR. 7.9.4 AVaR for Stable Distributions. 7.9.5 ETL versus AVaR. 7.9.6 Remarks on Spectral Risk Measures.
CHAPTER 8: Optimal Portfolios. 8.1 Introduction. 8.2 Mean-Variance Analysis. 8.2.1 Mean-Variance Optimization Problems. 8.2.2 The Mean-Variance Efficient Frontier. 8.2.3 Mean-Variance Analysis and SSD. 8.2.4 Adding a Risk-Free Asset. 8.3 Mean-Risk Analysis. 8.3.1 Mean-Risk Optimization Problems. 8.3.2 The Mean-Risk Efficient Frontier. 8.3.3 Mean-Risk Analysis and SSD. 8.3.4 Risk versus Dispersion Measures. 8.4 Summary. 8.5 Technical Appendix. 8.5.1 Types of Constraints. 8.5.2 Quadratic Approximations to Utility Functions. 8.5.3 Solving Mean-Variance Problems in Practice. 8.5.4 Solving Mean-Risk Problems in Practice. 8.5.5 Reward-Risk Analysis.
CHAPTER 9: Benchmark Tracking Problems. 9.1 Introduction. 9.2 The Tracking Error Problem. 9.3 Relation to Probability Metrics. 9.4 Examples of r.d. Metrics. 9.5 Numerical Example. 9.6 Summary. 9.7 Technical Appendix. 9.7.1 Deviation Measures and r.d. Metrics. 9.7.2 Remarks on the Axioms. 9.7.3 Minimal r.d. Metrics.
CHAPTER 10: Performance Measures. 10.1 Introduction. 10.2 Reward-to-Risk Ratios. 10.2.1 RR Ratios and the Efficient Portfolios. 10.2.2 Limitations in the Application of Reward-to-Risk Ratios. 10.2.3 The STARR. 10.2.4 The Sortino Ratio. 10.2.5 The Sortino-Satchell Ratio. 10.2.6 A One-Sided Variability Ratio. 10.2.7 The Rachev Ratio. 10.3 Reward-to-Variability Ratios. 10.3.1 RV Ratios and the Efficient Portfolios. 10.3.2 The Sharpe Ratio. 10.3.3 The Capital Market Line and the Sharpe Ratio. 10.4 Summary. 10.5 Technical Appendix. 10.5.1 Extensions of STARR. 10.5.2 Quasiconcave Performance Measures. 10.5.3 The Capital Market Line and Quasiconcave Ratios. 10.5.4 Nonquasiconcave Performance Measures. 10.5.5 Probability Metrics and Performance Measures.
Index.

163 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient and accurate mean-value first-order saddlepoint approximation (MVFOSA) method, which is generally more accurate than the mean-value first-order second-moment (MVFOSM) method and more efficient than the first-order reliability method (FORM) because it does not require an iterative search for the so-called most probable point.

153 citations


Journal ArticleDOI
TL;DR: Various computational approaches were investigated for analysing the impact of parameter uncertainty on predictions of streamflow for a water-balance hydrological model used in eastern Australia, and the shape (skewness) of the distribution had a significant effect on model output uncertainty.

133 citations


Journal ArticleDOI
TL;DR: New infinite-series representations are derived for the trivariate Rician probability density function and joint cumulative distribution function when the inverse covariance matrix is tridiagonal, and they are applied to the outage probability of a triple-branch selection combining receiver over correlated Rician channels.
Abstract: An exact expression for the joint density of three correlated Rician variables is not available in the open literature. In this letter, we derive new infinite series representations for the trivariate Rician probability density function (pdf) and the joint cumulative distribution function (cdf). Our results are limited to the case where the inverse covariance matrix is tridiagonal. This case seems the most general one that is tractable with Miller's approach and cannot be extended to more than three Rician variables. The outage probability of a triple-branch selective combining (SC) receiver over correlated Rician channels is presented as an application of the density function.

119 citations


Journal ArticleDOI
TL;DR: The authors showed that the Tversky-Kahneman probability weighting function is not increasing for all parameter values and therefore can assign negative decision weights to some outcomes, which in turn implies that Cumulative Prospect Theory could make choices not consistent with first-order stochastic dominance.
Abstract: Cumulative Prospect Theory has gained a great deal of support as an alternative to Expected Utility Theory as it accounts for a number of anomalies in the observed behavior of economic agents. Expected Utility Theory uses a utility function and subjective or objective probabilities to compare risky prospects. Cumulative Prospect Theory alters both of these aspects. The concave utility function is replaced by a loss-averse utility function and probabilities are replaced by decision weights. The latter are determined with a weighting function applied to the cumulative probability of the outcomes. Several different probability weighting functions have been suggested. The two most popular are the original proposal of Tversky and Kahneman and the compound-invariant form proposed by Prelec. This note shows that the Tversky-Kahneman probability weighting function is not increasing for all parameter values and therefore can assign negative decision weights to some outcomes. This in turn implies that Cumulative Prospect Theory could make choices not consistent with first-order stochastic dominance.
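The note's claim is easy to check numerically. The sketch below evaluates the Tversky-Kahneman weighting function w(p) = p^γ / (p^γ + (1 − p)^γ)^{1/γ} on a fine grid and tests monotonicity; γ = 0.61 is the commonly cited estimate, while γ = 0.2 is chosen here only to exhibit the failure. Because cumulative decision weights are differences w(F(x_i)) − w(F(x_{i−1})), any decreasing stretch of w implies a negative decision weight.

```python
# Numerical check that the Tversky-Kahneman weighting function is not
# monotone increasing for all parameter values.
import numpy as np

def tk_weight(p, gamma):
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

p = np.linspace(1e-6, 1.0 - 1e-6, 100_001)
for gamma in (0.61, 0.2):            # 0.61: commonly cited estimate; 0.2: extreme value
    dw = np.diff(tk_weight(p, gamma))
    print(f"gamma={gamma}: monotone increasing? {bool(np.all(dw >= 0))}")
# expected: True for gamma=0.61, False for gamma=0.2
```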

85 citations


Journal ArticleDOI
TL;DR: In this article, the exact probability density function of the maximum of arbitrary continuous dependent random variables and of absolutely continuous exchangeable random variables is derived for the case where the random variables have an elliptically contoured distribution.

83 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss some commonly used methods for determining the significance of peaks in the periodograms of time series and present the results of Monte Carlo simulations for a specific unevenly spaced time series obtained for V403 Car.
Abstract: We discuss some commonly used methods for determining the significance of peaks in the periodograms of time series. We review methods for constructing the classical significance tests, their corresponding false alarm probability functions and the role played in these by independent random variables and by empirical and theoretical cumulative distribution functions. We discuss the concepts of independent frequencies and oversampling in periodogram analysis. We then compare the results of new Monte Carlo simulations for evenly spaced time series with results obtained previously by other authors, and present the results of Monte Carlo simulations for a specific unevenly spaced time series obtained for V403 Car.
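The Monte Carlo construction of a false alarm probability follows directly from its definition as one minus the empirical cumulative distribution of the highest noise peak. The sketch below assumes evenly spaced data, Gaussian white noise, and a plain FFT periodogram; it illustrates the approach discussed above rather than reproducing the authors' simulations.

```python
# Empirical false alarm probability of the highest periodogram peak under a
# white-noise null hypothesis, estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 256, 5000
peaks = np.empty(trials)
for t in range(trials):
    x = rng.standard_normal(n)
    # normalized periodogram of zero-mean, unit-variance noise (positive frequencies)
    p = np.abs(np.fft.rfft(x)[1:]) ** 2 / n
    peaks[t] = p.max()

peaks.sort()
def false_alarm_probability(z):
    """Empirical P(max peak > z) under the white-noise null hypothesis."""
    return 1.0 - np.searchsorted(peaks, z) / trials

for z in (5.0, 8.0, 11.0):
    print(f"FAP(z={z}) ~ {false_alarm_probability(z):.4f}")
```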

79 citations


Journal ArticleDOI
TL;DR: It is pointed out here that the conventional extreme value analysis procedure often results in underestimation of the risk, because incorrect probability plotting positions are widely used and theoretical extreme value distributions are only asymptotic, so that in many cases they bring misleading information to the analysis.

79 citations


Journal ArticleDOI
TL;DR: This paper presents an analytical performance investigation of transmit beamforming (BF) systems in Rayleigh product multiple-input multiple-output (MIMO) channels, and derives new closed-form expressions for the cumulative distribution function, probability density function, and moments of the maximum eigenvalue of a product of independent complex Gaussian matrices to provide a complete statistical characterization of the received signal-to-noise ratio (SNR).
Abstract: This paper presents an analytical performance investigation of transmit beamforming (BF) systems in Rayleigh product multiple-input multiple-output (MIMO) channels. We first derive new closed-form expressions for the cumulative distribution function, probability density function, and moments of the maximum eigenvalue of a product of independent complex Gaussian matrices, which are used to provide a complete statistical characterization of the received signal-to-noise ratio (SNR). We then derive a number of key performance metrics, including outage probability, symbol error rate, and ergodic capacity. We examine, in detail, three important special cases of the Rayleigh product MIMO channel: the degenerate keyhole scenario and the multiple-input single-output and single-input multiple-output scenarios, for which we derive insightful closed-form expressions for various exact and asymptotic measures (e.g., diversity order, array gain, and high SNR power offset, among others). We also compare the performance of transmit BF with orthogonal space-time block codes and quantify the benefit of exploiting transmitter channel knowledge in Rayleigh product MIMO channels. This is shown to be significant even for low-dimensional systems.
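The statistic at the heart of this analysis, the largest eigenvalue of a product of independent complex Gaussian matrices, is easy to study by simulation, which also serves as a sanity check on closed-form CDF expressions. The dimensions and normalization below are assumptions chosen for illustration.

```python
# Monte Carlo sketch of the empirical CDF of the largest eigenvalue of
# W = H H^H, where H = H1 H2 is a product of independent complex Gaussian
# matrices modelling a Rayleigh product MIMO channel. With transmit
# beamforming, the received SNR is proportional to this largest eigenvalue.
import numpy as np

rng = np.random.default_rng(2)
nr, ns, nt, trials = 2, 2, 2, 20_000   # receive antennas, scatterers, transmit antennas

def cgauss(shape):
    """Zero-mean, unit-variance circularly symmetric complex Gaussian entries."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)

lam_max = np.empty(trials)
for t in range(trials):
    H = cgauss((nr, ns)) @ cgauss((ns, nt)) / np.sqrt(ns)
    lam_max[t] = np.linalg.eigvalsh(H @ H.conj().T).max()

lam_max.sort()
for x in (0.5, 1.0, 2.0, 4.0):
    cdf = np.searchsorted(lam_max, x) / trials
    print(f"P(lambda_max <= {x}) ~ {cdf:.3f}")
```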

77 citations


Journal ArticleDOI
TL;DR: In this paper, the expectation-maximization (EM) algorithm for Gaussian mixture modeling is improved via three statistical tests that have an increased capability to find the underlying model, while maintaining a low execution time.
Abstract: In this paper, the expectation-maximization (EM) algorithm for Gaussian mixture modeling is improved via three statistical tests. The first test is a multivariate normality criterion based on the Mahalanobis distance of a sample measurement vector from a certain Gaussian component center. The first test is used in order to derive a decision whether to split a component into another two or not. The second test is a central tendency criterion based on the observation that multivariate kurtosis becomes large if the component to be split is a mixture of two or more underlying Gaussian sources with common centers. If the common center hypothesis is true, the component is split into two new components and their centers are initialized by the center of the (old) component candidate for splitting. Otherwise, the splitting is accomplished by a discriminant derived by the third test. This test is based on marginal cumulative distribution functions. Experimental results are presented against seven other EM variants both on artificially generated data-sets and real ones. The experimental results demonstrate that the proposed EM variant has an increased capability to find the underlying model, while maintaining a low execution time.
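The first splitting test can be illustrated with a rough stand-in: under normality, squared Mahalanobis distances from the component centre follow a chi-square distribution with d degrees of freedom, so comparing their empirical distribution with that CDF flags components that are candidates for splitting. The Kolmogorov-Smirnov comparison and the 0.01 threshold below are assumptions for illustration, not the paper's exact criterion.

```python
# Rough Mahalanobis-distance normality check for a single mixture component.
import numpy as np
from scipy import stats

def should_split(points, alpha=0.01):
    """Return True if the points look non-Gaussian for a single component."""
    mu = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    diff = points - mu
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)  # squared Mahalanobis distances
    res = stats.kstest(d2, stats.chi2(df=points.shape[1]).cdf)     # compare with chi-square CDF
    return res.pvalue < alpha

rng = np.random.default_rng(3)
gaussian = rng.normal(size=(500, 2))
mixture = np.vstack([rng.normal(-3, 1, size=(250, 2)), rng.normal(3, 1, size=(250, 2))])
print(should_split(gaussian), should_split(mixture))   # expect False, True
```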

Journal Article
TL;DR: A cumulative distribution function estimated from the image is used to remap pixel gray levels so that a back-propagation neural network used for image compression achieves a higher compression ratio and converges more quickly.
Abstract: Image compression using artificial neural networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks trained with the back-propagation algorithm, which uses steepest descent for error minimization, are popular and are applied directly to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images and high-intensity images. When these images are compressed using a back-propagation network, convergence takes longer, because a given image may contain a number of distinct gray levels that differ only slightly from those of their neighboring pixels. If the gray levels of the pixels in an image and of their neighbors are mapped so that the difference between neighboring gray levels is minimized, both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly. Keywords: Back-propagation Neural Network, Cumulative Distribution Function, Correlation, Convergence.
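The CDF-based pixel mapping described above amounts to passing gray levels through the image's empirical cumulative distribution function, much like histogram equalization. A minimal sketch follows, with an assumed 8-bit image and without the neural network itself.

```python
# Remap pixel gray levels through the image's empirical CDF before compression.
import numpy as np

def cdf_remap(image):
    """Map 8-bit gray levels through the image's empirical CDF."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist) / image.size            # empirical CDF over gray levels
    lut = np.round(255.0 * cdf).astype(np.uint8)  # lookup table: level -> mapped level
    return lut[image]

rng = np.random.default_rng(4)
dark_image = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)  # low-intensity example
remapped = cdf_remap(dark_image)
print(dark_image.max(), remapped.max())   # the mapped image spreads over the full 0-255 range
```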

Journal ArticleDOI
TL;DR: The fixed-variable elicitation method was slightly faster and preferred by most participants, and showed slight but consistent superiority along several dimensions such as monotonicity, accuracy, and precision of the estimated fractiles.
Abstract: We present the results of an experiment comparing two popular methods for encoding probability distributions of continuous variables in decision analysis: eliciting values of a variable, X, through comparisons with a fixed probability wheel and eliciting the percentiles of the cumulative distribution, F(X), through comparisons with fixed values of the variable. We show slight but consistent superiority for the fixed variable method along several dimensions such as monotonicity, accuracy, and precision of the estimated fractiles. The fixed variable elicitation method was also slightly faster and preferred by most participants. We discuss the reasons for its superiority and conclude with several recommendations for the practice of probability assessment.

Proceedings ArticleDOI
10 Oct 2008
TL;DR: A new type of characterization of multivariate random quantities is introduced, the so-called localized cumulative distribution (LCD), which, in contrast to the conventional definition of a cumulative distribution, is unique and symmetric.
Abstract: This paper is concerned with distances for comparing multivariate random vectors with a special focus on the case that at least one of the random vectors is of discrete type, i.e., assumes values from a discrete set only. The first contribution is a new type of characterization of multivariate random quantities, the so called localized cumulative distribution (LCD) that, in contrast to the conventional definition of a cumulative distribution, is unique and symmetric. Based on the LCDs of the random vectors under consideration, the second contribution is the definition of generalized distance measures that are suitable for the multivariate case. These distances are used for both analysis and synthesis purposes. Analysis is concerned with assessing whether a given sample stems from a given continuous distribution. Synthesis is concerned with both density estimation, i.e., calculating a suitable continuous approximation of a given sample, and density discretization, i.e., approximation of a given continuous random vector by a discrete one.

Journal ArticleDOI
TL;DR: A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented and it is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed.
Abstract: A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon's mutual information and Fisher information, and the optimality of Jeffrey's prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
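The construction can be illustrated in its simplest special case, where the nonlinearity applied to the stimulus CDF is the identity, so the tuning curve equalizes the response histogram; the general case replaces this identity with a nonlinearity set by the neuron's mean-variance relationship. The stimulus ensemble and maximum firing rate below are assumptions for illustration.

```python
# Build a sigmoidal tuning curve from the empirical stimulus CDF.
import numpy as np

rng = np.random.default_rng(5)
stimuli = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # assumed stimulus ensemble

# empirical stimulus CDF evaluated on a grid
grid = np.linspace(0.0, stimuli.max(), 500)
F = np.searchsorted(np.sort(stimuli), grid) / stimuli.size

r_max = 50.0                  # assumed maximum firing rate (spikes/s)
tuning_curve = r_max * F      # sigmoidal in the stimulus because F is its CDF

# responses to the stimulus ensemble are spread uniformly over [0, r_max],
# so every part of the operating range is used equally often
responses = r_max * np.searchsorted(np.sort(stimuli), stimuli) / stimuli.size
print(np.percentile(responses, [10, 50, 90]))   # roughly [5, 25, 45]
```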

Journal ArticleDOI
TL;DR: Highly accurate closed-form approximations to the sum statistics of i.i.d. alpha-mu variates are derived and, as an extension, simple and precise approximations for the performance metrics of equal-gain combining and maximal-ratio combining receivers operating on i.i.d. alpha-mu fading channels are proposed.
Abstract: Sums of fading envelopes occur in several wireless communications applications, such as equal-gain combining, signal detection, outage probability, intersymbol interference, etc. The exact evaluation of the sum statistics is known to be very intricate. One of the purposes of this Letter is to provide highly accurate closed-form approximations to the probability density function and cumulative distribution function of the sum of independent identically distributed (i.i.d.) alpha-mu (generalized gamma) variates. Based on and as an extension of such an approach, simple precise approximations for the performance metrics of equal-gain combining and maximal-ratio combining receivers operating on i.i.d. alpha-mu fading channels are proposed. Examples are given to illustrate that, for practical purposes, exact and approximate solutions are indistinguishable from each other.

Journal ArticleDOI
TL;DR: In this paper, stress and strength are treated as discrete random variables, and a discrete stress-strength interference (SSI) model is presented using the universal generating function (UGF) method; the validity of the discrete model is demonstrated in a variety of circumstances.
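For a single component, the UGF computation reduces to combining the discrete stress and strength probability mass functions term by term and accumulating the probability that strength exceeds stress. A minimal sketch with invented numbers:

```python
# Discrete stress-strength reliability via pairwise combination of pmfs
# (the core operation behind the universal generating function approach).
from itertools import product

def ugf_reliability(strength, stress):
    """strength, stress: iterables of (value, probability) pairs."""
    return sum(ps * pl for (s, ps), (l, pl) in product(strength, stress) if s > l)

strength = [(8.0, 0.2), (10.0, 0.5), (12.0, 0.3)]   # discrete strength pmf (illustrative)
stress   = [(6.0, 0.4), (9.0, 0.4), (11.0, 0.2)]    # discrete stress pmf (illustrative)
print("P(strength > stress) =", ugf_reliability(strength, stress))   # 0.78
```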

Journal ArticleDOI
TL;DR: The ability of the Rician model to describe fading in wireless communications motivates the derivation of infinite-series representations for both the probability density function and the cumulative distribution function of the output signal-to-interference ratio at a dual SC receiver over correlated Rician fading channels.
Abstract: A dual-diversity receiver employing selection combining (SC) is often used in wireless communication systems due to its simplicity. The ability of the Rician model to describe fading in wireless communications motivates the derivation of infinite-series representations for both the probability density function (PDF) and the cumulative distribution function (CDF) of the output signal-to-interference ratio (SIR) at the dual SC receiver over correlated Rician fading channels in the presence of correlated Rayleigh-distributed cochannel interference (CCI). These expressions are used to study wireless system performance criteria, such as outage probability and average bit error probability (ABEP).

Journal ArticleDOI
TL;DR: The upper and lower bounds for the cumulative distribution function of the signal-to-noise ratio per hop are derived, and their probability density function and moment generating function are provided.
Abstract: Opportunistic relaying with cooperative diversity has been introduced in the literature as a simple alternative protocol to the distributed space-time coded protocol while achieving the same diversity-multiplexing tradeoff performance as a point-to-point multiple-input multiple-output scheme. In this paper, we derive the upper and lower bounds for the cumulative distribution function of the signal-to-noise ratio per hop, and then provide their probability density function and moment generating function. We also derive approximate but tight closed-form expressions for the lower and upper bounds of the outage probability of opportunistic relaying based on decode-and-forward transmission under the assumption of unbalanced/balanced hops and independent, identically distributed Rayleigh fading channels.

Journal ArticleDOI
TL;DR: The purpose of this tutorial is to demonstrate the potential value of the Performance versus Intensity (PI) function in both research and clinical settings, as it shows the cumulative distribution of useful speech information across the amplitude domain, as speech rises from inaudibility to full audibility.
Abstract: The purpose of this tutorial is to demonstrate the potential value of the Performance versus Intensity (PI) function in both research and clinical settings. The PI function describes recognition probability as a function of average speech amplitude. In effect, it shows the cumulative distribution of useful speech information across the amplitude domain, as speech rises from inaudibility to full audibility. The basic PI function can be modeled by a cubed exponential function with three free parameters representing: (a) threshold of initial audibility, (b) amplitude range from initial to full audibility, and (c) recognition probability at full audibility. Phoneme scoring of responses to consonant-vowel-consonant words makes it possible to obtain complete PI functions in a reasonably short time with acceptable test-retest reliability. Two examples of research applications are shown here: (a) the preclinical behavioral evaluation of compression amplification schemes, and (b) assessment of the distribution of reverberation effects in the amplitude domain. Three examples of clinical application show data from adults with different degrees and configurations of sensorineural hearing loss. In all three cases, the PI function provides potentially useful information over and above that which would be obtained from measurement of Speech Reception Threshold and maximum word recognition in Phonetically Balanced lists. Clinical application can be simplified by appropriate software and by a routine to convert phoneme recognition scores into estimates of the more familiar whole-word recognition scores. By making assumptions about context effects, phoneme recognition scores can also be used to estimate word recognition in sentences. It is hard to escape the conclusion that the PI function is an easily available, potentially valuable, but largely neglected resource for both hearing research and clinical audiology.

Journal Article
TL;DR: In this paper, the probability density functions and cumulative distribution functions of the g-and-h transformations are derived in general parametric form, and it is shown how the g and h parameters can be determined for prespecified values of skew and kurtosis.
Abstract: The family of g-and-h transformations is a popular class of algorithms used for simulating non-normal distributions because of their simplicity and ease of execution. In general, two limitations associated with g-and-h transformations are that their probability density functions (pdfs) and cumulative distribution functions (cdfs) are unknown. In view of this, the g-and-h transformations' pdfs and cdfs are derived in general parametric form. Moments are also derived and it is subsequently shown how the g and h parameters can be determined for prespecified values of skew and kurtosis. Numerical examples and parametric plots of g-and-h pdfs and cdfs are provided to confirm and demonstrate the methodology. It is also shown how g-and-h distributions can be used in the context of distribution fitting using real data sets.
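The transformation itself is simple to state: a standard normal Z is mapped to Y = ((e^{gZ} − 1)/g) e^{hZ²/2}, with g controlling skewness and h controlling tail weight. The sketch below checks this behaviour with sample moments rather than the paper's closed-form expressions; the parameter values are illustrative only.

```python
# Simulate g-and-h variates and inspect how g and h drive skew and kurtosis.
import numpy as np
from scipy import stats

def g_and_h(z, g, h):
    core = z if g == 0 else (np.exp(g * z) - 1.0) / g   # g = 0 reduces to the h-distribution
    return core * np.exp(h * z ** 2 / 2.0)

rng = np.random.default_rng(6)
z = rng.standard_normal(1_000_000)
for g, h in [(0.0, 0.0), (0.5, 0.0), (0.0, 0.1), (0.5, 0.1)]:
    y = g_and_h(z, g, h)
    print(f"g={g}, h={h}: skew={stats.skew(y):+.2f}, excess kurtosis={stats.kurtosis(y):+.2f}")
```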

Journal ArticleDOI
29 Feb 2008 - Extremes
TL;DR: In this article, it was shown that max-stable random vectors with unit Fréchet marginals are in one-to-one correspondence with convex sets K in [0, ∞)^d called max-zonoids, which can be characterised as limits of Minkowski sums of cross-polytopes or as the selection expectation of a random cross-polytope whose distribution is controlled by the spectral measure of the max-stable vector.
Abstract: It is shown that max-stable random vectors in [0, ∞)^d with unit Fréchet marginals are in one-to-one correspondence with convex sets K in [0, ∞)^d called max-zonoids. The max-zonoids can be characterised as sets obtained as limits of Minkowski sums of cross-polytopes or, alternatively, as the selection expectation of a random cross-polytope whose distribution is controlled by the spectral measure of the max-stable random vector. Furthermore, the cumulative distribution function P(ξ ≤ x) of a max-stable random vector ξ with unit Fréchet marginals is determined by the norm of the inverse to x, where all possible norms are given by the support functions of (normalised) max-zonoids. As an application, geometrical interpretations of a number of well-known concepts from the theory of multivariate extreme values and copulas are provided.
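The norm representation of the CDF can be made concrete with the bivariate logistic family, whose norm is the l_{1/α} norm; the sketch below checks the unit Fréchet marginals and the independence case α = 1. The choice of family and the parameter values are assumptions made here for illustration.

```python
# P(xi <= x) = exp(-||(1/x_1, 1/x_2)||) for a max-stable vector with unit
# Fréchet marginals, illustrated with the logistic dependence model.
import numpy as np

def logistic_max_stable_cdf(x, alpha=0.5):
    """P(xi_1 <= x[0], xi_2 <= x[1]) for the logistic model, 0 < alpha <= 1."""
    x = np.asarray(x, dtype=float)
    norm = np.sum((1.0 / x) ** (1.0 / alpha)) ** alpha   # l_{1/alpha} norm of 1/x
    return np.exp(-norm)

# marginals are unit Fréchet: let the other coordinate go to infinity
print(logistic_max_stable_cdf([2.0, 1e12]), np.exp(-1.0 / 2.0))        # both ~ 0.6065
# alpha = 1 gives independence (product of the two Fréchet marginals)
print(logistic_max_stable_cdf([1.0, 1.0], alpha=1.0), np.exp(-2.0))
```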

Journal ArticleDOI
TL;DR: In this article, a non-parametric approach to size characterization was supported by a special method of uncertainty analysis based on resampling techniques, which provided point and interval estimates of size distribution functions and related hazard parameters.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the condensation transition from a different aspect, that of extreme value statistics, and derived the cumulative distribution of the largest mass in the system and computed its asymptotic behaviour.
Abstract: We study the factorized steady state of a general class of mass transport models in which mass, a conserved quantity, is transferred stochastically between sites. Condensation in such models is exhibited when, above a critical mass density, the marginal distribution for the mass at a single site develops a bump, p_cond(m), at large mass m. This bump corresponds to a condensate site carrying a finite fraction of the mass in the system. Here, we study the condensation transition from a different aspect, that of extreme value statistics. We consider the cumulative distribution of the largest mass in the system and compute its asymptotic behaviour. We show three distinct behaviours: at subcritical densities the distribution is Gumbel; at the critical density the distribution is Fréchet, and above the critical density a different distribution emerges. We relate p_cond(m) to the probability density of the largest mass in the system.

Dissertation
Osman Hasan
01 Jan 2008
TL;DR: A framework that can be used to formalize and verify any continuous random variable for which the inverse of the cumulative distribution function can be expressed in a closed mathematical form is presented and a formalization infrastructure that allows us to formally reason about statistical properties for discrete random variables is provided.
Abstract: Probabilistic analysis is a tool of fundamental importance to virtually all scientists and engineers as they often have to deal with systems that exhibit random or unpredictable elements. Traditionally, computer simulation techniques are used to perform probabilistic analysis. However, they provide less accurate results and cannot handle large-scale problems due to their enormous computer processing time requirements. To overcome these limitations, this thesis proposes to perform probabilistic analysis by formally specifying the behavior of random systems in higher-order logic and to use these models for verifying the intended probabilistic and statistical properties in a computer-based theorem prover. The analysis carried out in this way is free from any approximation or precision issues due to the mathematical nature of the models and the inherent soundness of the theorem proving approach. The thesis mainly targets the two most essential components for this task, i.e., the higher-order-logic formalization of random variables and the ability to formally verify the probabilistic and statistical properties of these random variables within a theorem prover. We present a framework that can be used to formalize and verify any continuous random variable for which the inverse of the cumulative distribution function can be expressed in a closed mathematical form. Similarly, we provide a formalization infrastructure that allows us to formally reason about statistical properties, such as mean, variance and tail distribution bounds, for discrete random variables. In order to illustrate the practical effectiveness of the proposed approach, we consider the probabilistic analysis of three examples: the Coupon Collector's problem, the roundoff error in a digital processor and the Stop-and-Wait protocol. All the above-mentioned work is conducted using the HOL theorem prover.
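The core mathematical fact exploited by the framework, that X = F⁻¹(U) has CDF F when U is uniform on (0, 1), can be illustrated numerically (in Python here rather than in higher-order logic), using the exponential distribution as an example of a distribution whose inverse CDF has a closed form.

```python
# Inverse transform sampling: X = F^{-1}(U) has CDF F for uniform U.
import numpy as np

rng = np.random.default_rng(7)

def exponential_inverse_cdf(u, lam=2.0):
    """Closed-form inverse of F(x) = 1 - exp(-lam * x)."""
    return -np.log(1.0 - u) / lam

u = rng.uniform(size=1_000_000)
x = exponential_inverse_cdf(u)

# check the defining property P(X <= t) = 1 - exp(-lam * t) at a few points
for t in (0.1, 0.5, 1.0):
    print(f"t={t}: empirical {np.mean(x <= t):.4f} vs exact {1 - np.exp(-2.0 * t):.4f}")
```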

Journal ArticleDOI
TL;DR: This paper presents a performance evaluation and measurement of a number of heterogeneous end-to-end paths, taking into account a wide range of statistics, and shows how the measured statistics can be fruitfully used in the context of network control and management.

Journal ArticleDOI
TL;DR: In this article, the authors considered model selection, estimation and forecasting for a class of integer autoregressive models suitable for use when analysing time series count data, where any number of lags may be entertained, and estimation may be performed by likelihood methods.

Journal ArticleDOI
TL;DR: The dynamic reliability model of components is developed using order statistics and probability differential equations; it shows that when strength does not degrade, both the reliability and the hazard rate of components decrease over time.
Abstract: The dynamic reliability model of components is developed using order statistics and probability differential equations. The relationship between reliability and time, and that between the hazard rate and time, are discussed in this paper. First, according to the statistical meaning of random load application, the cumulative distribution function and probability density function of the equivalent load when a random load is applied multiple times are derived. Further, the reliability model of components under repeated random loads is developed. Then, with the loading process described as a Poisson process, the dynamic reliability models of components without strength degradation and with strength degradation are developed, respectively. Finally, the reliability and the hazard rate of components are discussed. The results show that, when strength does not degrade, the reliability of components decreases over time and the hazard rate of components also decreases over time. When strength degrades, the reliability of components decreases over time more markedly, and the hazard rate curve is bathtub-shaped.
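For the non-degrading case, the construction in the abstract can be sketched numerically: conditional on strength s, survival to time t under Poisson loading with rate λ has probability exp(−λt(1 − F_load(s))), and averaging over the strength distribution gives R(t), whose hazard rate decreases in t. The normal strength and load distributions and the parameter values below are assumptions for illustration.

```python
# Reliability and hazard rate under repeated random loads (no strength degradation).
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

strength = stats.norm(loc=600.0, scale=40.0)   # assumed strength distribution
load = stats.norm(loc=450.0, scale=50.0)       # assumed per-application load distribution
lam = 2.0                                      # assumed loading rate (applications per unit time)

s = np.linspace(strength.ppf(1e-6), strength.ppf(1 - 1e-6), 4000)
w = strength.pdf(s)

def reliability(t):
    # R(t) = E_s[ exp(-lam * t * (1 - F_load(s))) ]
    return trapezoid(w * np.exp(-lam * t * (1.0 - load.cdf(s))), s)

t = np.array([1.0, 5.0, 10.0, 20.0])
R = np.array([reliability(ti) for ti in t])
h = -np.gradient(np.log(R), t)                 # hazard rate h(t) = -d ln R(t) / dt
print("R(t):", np.round(R, 4))
print("h(t):", np.round(h, 4))                 # decreasing in t, as stated in the abstract
```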

Journal ArticleDOI
TL;DR: A simple method is introduced, based on Fourier series expansion and Kolmogorov tests, which overcomes the problem of inadequate histogram display when one deals with data drawn from continuous variables.

Journal ArticleDOI
TL;DR: Numerical results show that SH-R can achieve the same capacity as an ideal full feedback system when the number of users is large while it reduces the amount of feedback greatly, but ETE-R provides relatively poor performance.
Abstract: In this letter, we investigate amplify-and-forward relaying where a source communicates with the best user through an intermediate relay that covers multiple users. To reduce the amount of feedback that is needed to select the best user, we consider two kinds of SNR-threshold based channel quality information (CQI) reporting: end-to-end SNR based reporting (ETE-R) and second-hop quality based reporting (SH-R). We derive the probability density function (pdf) and the cumulative distribution function (cdf) of the SNR received at the best user in Rayleigh fading channels, and compare the average capacity, the average number of feedback users, and the feedback outage probability of the two reporting schemes. Numerical results show that SH-R can achieve the same capacity as an ideal full-feedback system when the number of users is large while greatly reducing the amount of feedback, but ETE-R provides relatively poor performance.