
Showing papers on "Cumulative distribution function published in 1995"


Posted Content
TL;DR: In this paper, the authors consider the formal statistical procedures that could be used to assess the accuracy of value at risk (VaR) estimates and show that verification of the accuracy becomes substantially more difficult as the cumulative probability estimate being verified becomes smaller.
Abstract: Risk exposures are typically quantified in terms of a "value at risk" (VaR) estimate. A VaR estimate corresponds to a specific critical value of a portfolio's potential one-day profit and loss distribution. Given their functions both as internal risk management tools and as potential regulatory measures of risk exposure, it is important to assess and quantify the accuracy of an institution's VaR estimates. This study considers the formal statistical procedures that could be used to assess the accuracy of VaR estimates. The analysis demonstrates that verification of the accuracy of tail probability value estimates becomes substantially more difficult as the cumulative probability estimate being verified becomes smaller. In the extreme, it becomes virtually impossible to verify with any accuracy the potential losses associated with extremely rare events. Moreover, the economic importance of not being able to reliably detect an inaccurate model or an under-reporting institution potentially becomes much more pronounced as the cumulative probability estimate being verified becomes smaller. It does not appear possible for a bank or its supervisor to reliably verify the accuracy of an institution's internal model loss exposure estimates using standard statistical techniques. The results have implications both for banks that wish to assess the accuracy of their internal risk measurement models as well as for supervisors who must verify the accuracy of an institution's risk exposure estimate reported under an internal models approach to model risk.

2,042 citations
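As background to the verification problem described above, the following is a minimal sketch (our own illustration, not the procedures analyzed in the paper) of a one-sided binomial exceedance backtest: with roughly a year of daily observations, the power to detect a model that understates the true exceedance probability by a factor of two drops sharply as the coverage level moves further into the tail.

```python
# Sketch: why verifying small tail probabilities is hard (hypothetical
# binomial exceedance backtest, not the paper's exact procedure).
from scipy import stats

T = 250  # assumed sample size: about one year of daily P&L observations

for p_reported in (0.05, 0.01, 0.001):
    p_true = 2 * p_reported          # model understates the exceedance probability by 2x
    # Reject if the exceedance count is improbably high under the reported p:
    # k_star is the smallest count k with P(X >= k | p_reported) <= 5%.
    k_star = stats.binom.ppf(0.95, T, p_reported) + 1
    power = 1 - stats.binom.cdf(k_star - 1, T, p_true)
    print(f"coverage {p_reported:.3f}: power to detect 2x understatement = {power:.2f}")
```

The printed power falls from roughly 0.9 at the 5% level to under 0.1 at the 0.1% level, which is the qualitative point made in the abstract.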


Journal ArticleDOI
TL;DR: In this paper, the authors consider the formal statistical procedures that could be used to assess the accuracy of value at risk (VaR) estimates and show that verification of the accuracy becomes substantially more difficult as the cumulative probability estimate being verified becomes smaller.
Abstract: Risk exposures are typically quantified in terms of a "value at risk" (VaR) estimate. A VaR estimate corresponds to a specific critical value of a portfolio's potential one-day profit and loss distribution. Given their functions both as internal risk management tools and as potential regulatory measures of risk exposure, it is important to assess and quantify the accuracy of an institution's VaR estimates. This study considers the formal statistical procedures that could be used to assess the accuracy of VaR estimates. The analysis demonstrates that verification of the accuracy of tail probability value estimates becomes substantially more difficult as the cumulative probability estimate being verified becomes smaller. In the extreme, it becomes virtually impossible to verify with any accuracy the potential losses associated with extremely rare events. Moreover, the economic importance of not being able to reliably detect an inaccurate model or an under-reporting institution potentially becomes much more pronounced as the cumulative probability estimate being verified becomes smaller. It does not appear possible for a bank or its supervisor to reliably verify the accuracy of an institution's internal model loss exposure estimates using standard statistical techniques. The results have implications both for banks that wish to assess the accuracy of their internal risk measurement models as well as for supervisors who must verify the accuracy of an institution's risk exposure estimate reported under an internal models approach to model risk.

1,743 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the ensemble of random symmetric n×n matrices specified by an orthogonal invariant probability distribution and showed that the normalized eigenvalue counting function of this ensemble converges in probability to a nonrandom limit as n→∞ and that this limiting distribution is the solution of a certain self-consistent equation.
Abstract: We consider the ensemble of random symmetric n×n matrices specified by an orthogonal invariant probability distribution. We treat this distribution as a Gibbs measure of a mean-field-type model. This allows us to show that the normalized eigenvalue counting function of this ensemble converges in probability to a nonrandom limit as n→∞ and that this limiting distribution is the solution of a certain self-consistent equation.

169 citations
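As a numerical illustration of the object studied here, the following sketch computes the normalized eigenvalue counting function for one simple ensemble (a GOE-like symmetric matrix with Gaussian entries); the ensemble and the scaling are our choices, not the general orthogonal-invariant measures treated in the paper.

```python
# Sketch: empirical normalized eigenvalue counting function N_n(lambda)
# for a random symmetric matrix (GOE-like example, not the paper's general setup).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)          # symmetric, scaled so the spectrum stays bounded
eigvals = np.sort(np.linalg.eigvalsh(H))

def counting_function(lam):
    """N_n(lam) = (number of eigenvalues <= lam) / n."""
    return np.searchsorted(eigvals, lam, side="right") / n

for lam in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(lam, counting_function(lam))
```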


Journal ArticleDOI
TL;DR: In this article, the authors describe the cumulative distribution function of excess returns conditional on a broad set of predictors that summarize the state of the economy by estimating a sequence of conditional logit models over a grid of values of the response variable.
Abstract: In this article we describe the cumulative distribution function of excess returns conditional on a broad set of predictors that summarize the state of the economy. We do so by estimating a sequence of conditional logit models over a grid of values of the response variable. Our method uncovers higher-order multidimensional structure that cannot be found by modeling only the first two moments of the distribution. We compare two approaches to modeling: one based on a conventional linear logit model and the other based on an additive logit. The second approach avoids the “curse of dimensionality” problem of fully nonparametric methods while retaining both interpretability and the ability to let the data determine the shape of the relationship between the response variable and the predictors. We find that the additive logit fits better and reveals aspects of the data that remain undetected by the linear logit. The additive model retains its superiority even in out-of-sample prediction and portfolio s...

155 citations
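A minimal sketch of the general idea (one binary logit per threshold on a grid of response values, giving an estimated conditional cumulative distribution function) is shown below, with synthetic data; it is not the authors' linear or additive specification.

```python
# Sketch: estimating a conditional CDF F(y|x) with a sequence of logits,
# one per threshold on a grid (illustrative only; synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                       # hypothetical predictors
y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=0.5, size=500)

grid = np.quantile(y, np.linspace(0.05, 0.95, 19))  # thresholds c_1 < ... < c_m
models = [LogisticRegression().fit(X, (y <= c).astype(int)) for c in grid]

x_new = np.zeros((1, 3))                            # evaluate the conditional CDF at one point
cdf = np.array([m.predict_proba(x_new)[0, 1] for m in models])
print(np.column_stack([grid, cdf]))                 # estimated F(c | x_new) over the grid
```

A practical implementation would also enforce monotonicity of the estimated probabilities across thresholds, since separately fitted logits need not produce a monotone curve.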


Posted Content
TL;DR: In this paper, an overrelaxed Markov chain Monte Carlo (MCMC) algorithm based on order statistics is proposed, which can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed.
Abstract: Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n^2 iterations. Such random walks can sometimes be suppressed using ``overrelaxed'' variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model.

130 citations
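A minimal sketch of an ordered-overrelaxation update for a single full conditional with computable cumulative distribution function and inverse is shown below; the unit-normal conditional and the value K = 20 are illustrative choices, and this is a simplified reading of the general recipe (rank the current value among K fresh candidates and move to the mirrored rank) rather than the paper's implementation.

```python
# Sketch: one ordered-overrelaxation update for a full conditional whose
# CDF and inverse CDF are computable (illustrative unit-normal conditional).
import numpy as np
from scipy import stats

def ordered_overrelax(x, cdf, inv_cdf, K, rng):
    """Move x to the value of mirrored rank among K fresh draws plus x itself."""
    u = cdf(x)                               # work on the uniform scale via the CDF
    draws = np.sort(rng.uniform(size=K))     # K iid Uniform(0,1) candidates
    r = int(np.searchsorted(draws, u))       # rank of u among the candidates
    j = K - r                                # mirrored rank in the combined set of K+1 values
    if j == r:
        return x                             # the mirrored value is the current point itself
    u_new = draws[j] if j < r else draws[j - 1]
    return inv_cdf(u_new)

rng = np.random.default_rng(2)
x = 1.5
for _ in range(5):
    x = ordered_overrelax(x, stats.norm.cdf, stats.norm.ppf, K=20, rng=rng)
    print(x)
```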


Book
04 Jul 1995
TL;DR: A brief review of distributions can be found in this book, where the authors cover the definitions of probability, conditional probability and the multiplication rule, and the central limit theorem.
Abstract: PRODUCT EFFECTIVENESS AND WORTH: Product Attributes; Programmatic Factors; Product Effectiveness Factors; Operational Availability; Design, Use, and Logistic Effectiveness (Design Effectiveness, Use Effectiveness, Logistic Effectiveness); Reliability (Mission Reliability, Logistic Reliability); Restoration; Maintainability; Time Elements and Product Effectiveness; Relationships among Time Intervals; Assigning Responsibility; Integrated Product and Process Development; Managing Product Effectiveness; Summary.
PROBABILITY CONCEPTS: Random Events; Definitions of Probability; Basic Theorems of Probability (Conditional Probability and Multiplication Rule, Statistical Independence, Total Probability Theorem, Bayes' Theorem); Random Variables and Their Distributions; Random Variables; Probability Distribution; Main Descriptors of a Random Variable (Mean, Variance and Standard Deviation, Markov Inequality, Chebyshev Inequality, Skewness, Quantiles and Percentiles); Brief Review of Distributions (Discrete Distributions, Continuous Distributions); Multiple Random Variables (Joint Probability, Conditional Probability Distributions); Covariance and Correlation; Functions of Random Variables; Probability Distributions; Main Descriptors of Random Functions; Random Processes; Definition of a Random Process; Main Descriptors of a Random Process (Mean Value, Variance); Stationary Random Processes; Ergodicity of Random Processes; Counting Processes; Recurrent Point Processes; Markov Process; Some Limit Results in Probability and Stochastic Processes; Limit Theorems in Probability Theory (The Central Limit Theorem, The Poisson Theorem, A Random Number of Random Variables in a Sum); Stochastic Processes (Crossing a "High Level" by a Continuous Process, Thinning a Point Process); The Superposition of Point Processes.
STATISTICAL INFERENCE CONCEPTS: Statistical Estimation; Point Estimation (Method of Moments, Method of Maximum Likelihood); Interval Estimation; Hypothesis Testing; Frequency Histogram; Goodness-of-Fit Tests (The Chi-Square Test, The Kolmogorov-Smirnov Test, Sample Comparison); Reliability Regression Model Fitting; Gauss-Markov Theorem and Linear Regression (Regression Analysis, The Gauss-Markov Theorem, Multiple Linear Regression); Proportional Hazard (PH) and Accelerated Life (AL) Models (Accelerated Life (AL) Model, Proportional Hazard (PH) Model); Accelerated Life Regression for Constant Stress; Accelerated Life Regression for Time-Dependent Stress.
PRACTICAL RELIABILITY CONCEPTS: Reliability Measures; Time-to-Failure Distribution and Reliability Function; Mean Time to Failure and Percentile Life; Failure Rate and Cumulative Hazard Function; Life Distributions as Reliability Models; Geometric Distribution; The Binomial Distribution; Exponential Distribution; Classes of Distribution Functions Based on Aging (Failure Rate and the Notion of Aging, Bounds on Reliability for Aging Distributions, Inequality for Coefficient of Variation, Cumulative Damage Model); The Weibu...

129 citations


Journal ArticleDOI
TL;DR: In this paper, the authors relax the assumption that the cumulative distribution function of the demand is completely known and merely assume that the first two moments of the distribution function are known, allowing customers to balk when inventory level is low.
Abstract: The purpose of this paper is to study the classic newsboy model with more realistic assumptions. First, we allow customers to balk when inventory level is low. Secondly, we relax the assumption that the cumulative distribution function of the demand is completely known and merely assume that the first two moments of the distribution function are known.

122 citations
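For context, the classical distribution-free result of this two-moment type is Scarf's max-min order quantity, sketched below; it addresses the standard newsboy problem and does not include the balking feature studied in this paper.

```python
# Sketch: Scarf's classical distribution-free (max-min) order quantity for the
# standard newsboy problem, using only the mean and standard deviation of demand.
# Illustrative background only; the paper's balking variant is not modeled here.
from math import sqrt

def scarf_order_quantity(mu, sigma, underage_cost, overage_cost):
    """Max-min order quantity when only the first two moments of demand are known."""
    ratio = underage_cost / overage_cost
    return mu + (sigma / 2) * (sqrt(ratio) - sqrt(1 / ratio))

# Example: unit cost 6, selling price 10, no salvage, so underage = 4 and overage = 6.
print(scarf_order_quantity(mu=100, sigma=30, underage_cost=4, overage_cost=6))
```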


Journal ArticleDOI
TL;DR: The reverse cumulative distribution plot is a graphic tool that completely displays all the data, allows a rapid visual assessment of important details of the distribution, and simplifies comparison of distributions.
Abstract: Serologic data often have a wide range and commonly do not approximate a normal distribution. Means, medians, SDs, or other conventional numerical summaries of antibody data may not adequately or fully describe these complex data. The reverse cumulative distribution plot is a graphic tool that completely displays all the data, allows a rapid visual assessment of important details of the distribution, and simplifies comparison of distributions.

94 citations
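A short sketch of how such a plot can be drawn from a sample of titers follows (synthetic data; the plot shows, for each observed value, the proportion of subjects at or above it).

```python
# Sketch: a reverse cumulative distribution plot for a sample of antibody
# titers (synthetic data; proportion of subjects >= each observed value).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
titers = np.exp(rng.normal(loc=4, scale=1.2, size=200))   # hypothetical log-normal titers

x = np.sort(titers)
prop_ge = 1.0 - np.arange(len(x)) / len(x)    # fraction of subjects with titer >= x[i]

plt.step(x, prop_ge, where="post")
plt.xscale("log")
plt.xlabel("Antibody titer")
plt.ylabel("Proportion of subjects with titer ≥ x")
plt.title("Reverse cumulative distribution plot")
plt.show()
```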


Posted Content
Mark Yuying An1
TL;DR: In this paper, a broad class of log-concave probability distributions that arise in economics of uncertainty and information are studied and simple non-parametric testing procedures for logconcavity are proposed.
Abstract: This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics.

80 citations


Journal ArticleDOI
TL;DR: In this paper, numerical inversion of the characteristic function is used as a tool for obtaining cumulative distribution functions, which is suitable for instructional purposes, particularly in the illustration of the inversion theorems covered in graduate probability courses.
Abstract: We review and discuss numerical inversion of the characteristic function as a tool for obtaining cumulative distribution functions. With the availability of high-speed computing and symbolic computation software, the method is ideally suited for instructional purposes, particularly in the illustration of the inversion theorems covered in graduate probability courses. The method is also available as an alternative to asymptotic approximations, Monte Carlo, or bootstrap techniques when analytic expressions for the distribution function are not available. We illustrate the method with several examples, including one which is concerned with the detection of possible clusters of disease in an epidemiologic study.

79 citations
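As an illustration of the kind of inversion the article reviews, here is a short sketch using the Gil-Pelaez formula (our choice of inversion formula, checked here against a case with a known answer, the standard normal).

```python
# Sketch: numerical CDF via characteristic-function inversion using the
# Gil-Pelaez formula, F(x) = 1/2 - (1/pi) * int_0^inf Im(exp(-itx) phi(t)) / t dt.
import numpy as np
from scipy import integrate, stats

def cdf_from_cf(cf, x):
    """Cumulative distribution function recovered from the characteristic function cf."""
    integrand = lambda t: np.imag(np.exp(-1j * t * x) * cf(t)) / t
    value, _ = integrate.quad(integrand, 0, np.inf, limit=200)
    return 0.5 - value / np.pi

normal_cf = lambda t: np.exp(-0.5 * t ** 2)      # characteristic function of N(0, 1)

for x in (-1.0, 0.0, 1.96):
    print(x, cdf_from_cf(normal_cf, x), stats.norm.cdf(x))
```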


Journal ArticleDOI
TL;DR: In this paper, a countable set of uncorrelated random variables is obtained from an arbitrary continuous random variable X, and the properties of these variables allow us to regard them as principal axes for X with respect to the distance function d(u, v) = [formula].

Journal ArticleDOI
TL;DR: In this article, a conservative finite-sample simultaneous confidence envelope for a density can be found by solving a finite set of finite-dimensional linear programming problems if the density is known to be monotonic or to have at most $k$ modes relative to a positive weight function.
Abstract: A conservative finite-sample simultaneous confidence envelope for a density can be found by solving a finite set of finite-dimensional linear programming problems if the density is known to be monotonic or to have at most $k$ modes relative to a positive weight function. The dimension of the problems is at most $(n/\log n)^{1/3}$, where $n$ is the number of observations. The linear programs find densities attaining the largest and smallest values at a point among cumulative distribution functions in a confidence set defined using the assumed shape restriction and differences between the empirical cumulative distribution function evaluated at a subset of the observed points. Bounds at any finite set of points can be extrapolated conservatively using the shape restriction. The optima are attained by densities piecewise proportional to the weight function with discontinuities at a subset of the observations and at most five other points. If the weight function is constant and the density satisfies a local Lipschitz condition with exponent $\varrho$, the width of the bounds converges to zero at the optimal rate $(\log n/n)^{\varrho/(1+2\varrho)}$ outside every neighborhood of the set of modes, if a "bandwidth" parameter is chosen correctly. The integrated width of the bounds converges at the same rate on intervals where the density satisfies a Lipschitz condition if the intervals are strictly within the support of the density. The approach also gives algorithms to compute confidence intervals for the support of monotonic densities and for the mode of unimodal densities, lower confidence intervals on the number of modes of a distribution and conservative tests of the hypothesis of $k$-modality. We use the method to compute confidence bounds for the probability density of aftershocks of the 1984 Morgan Hill, CA, earthquake, assuming aftershock times are an inhomogeneous Poisson point process with decreasing intensity.

Journal ArticleDOI
TL;DR: Partial-sums relations for probability distributions on a finite grid of evenly-spaced points are considered, including effects of grid refinements and extensions, and their relationships to the traditional relations are described.
Abstract: Special stochastic-dominance relations for probability distributions on a finite grid of evenly-spaced points are considered. The relations depend solely on iterated partial sums of grid-point probabilities and are very computer efficient. Their corresponding classes of utility functions for expected-utility comparisons consist of functions defined on the grid that mimic in the large the traditional continuous functions whose derivatives alternate in sign. The first-degree and second-degree relations are identical to their traditional counterparts defined from iterated integrals of cumulative distribution functions. The higher-degree relations differ from the traditional relations in interesting and sometimes subtle ways. The paper explores aspects of the partial-sums relations, including effects of grid refinements and extensions, and describes their relationships to the traditional relations.
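A small sketch of the iterated partial-sum comparisons on an evenly-spaced grid appears below (first and second degree only; per the abstract these coincide with the traditional relations, while the higher-degree subtleties are not captured here).

```python
# Sketch: first- and second-degree dominance checks via iterated partial sums
# of grid-point probabilities on an evenly-spaced grid (illustrative only).
import numpy as np

def iterated_partial_sums(p, degree):
    """Apply cumulative summation `degree` times to a probability vector."""
    s = np.asarray(p, dtype=float)
    for _ in range(degree):
        s = np.cumsum(s)
    return s

def dominates(p, q, degree):
    """True if distribution p dominates q at the given degree (partial-sum sense)."""
    return bool(np.all(iterated_partial_sums(p, degree) <= iterated_partial_sums(q, degree)))

p = np.array([0.1, 0.2, 0.3, 0.4])    # hypothetical probabilities on a 4-point grid
q = np.array([0.25, 0.25, 0.25, 0.25])
print(dominates(p, q, degree=1))      # first-degree check
print(dominates(p, q, degree=2))      # second-degree check
```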

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method for estimating the parameters and quantiles of continuous distributions in two steps: first, some elemental estimates are obtained by solving equations relating the cumulative distribution function or the survival function to their percentile values for some elemental subsets of the observations.

Journal ArticleDOI
TL;DR: In this paper, the authors derived the fatigue strength distribution as a function of the number of cycles to failure in the stationary random loading process and used the stress-strength interference technique to calculate reliability.

Journal ArticleDOI
TL;DR: In this article, the authors propose an algorithm for obtaining good estimates of the three-parameter Weibull distribution in eight steps; it relies on the Simple Iteration Procedure, which always converges, converges quickly, and does not require any conditions whatsoever.

Journal ArticleDOI
TL;DR: In this paper, it is shown that existing methods can lead to a greater chance of falsely rejecting the fit of the negative exponential model and inferring that fire frequencies have changed through time.
Abstract: This paper reviews methods used for testing the fit of the cumulative form of a negative exponential distribution to the cumulative distribution of forest age-classes. It is shown that existing methods can lead to a greater chance of falsely rejecting the fit of the negative exponential model and inferring that fire frequencies have changed through time. This results when the old-age tail of a negative exponential distribution is mathematically assumed to be present at the end of the age-class distribution. In reality, the tail is censored from sample distributions of forest age-classes. Censoring alters the shape of a cumulative age-class distribution from the straight line expected for a semi-log graph of the cumulative negative exponential model. A solution to this problem is proposed that restricts the tests-of-fit to the portion of the negative exponential distribution that overlaps with the data to be tested. The cumulative age-class distribution can then be compared directly with the cumulative of a truncated negative exponential distribution. Considerations for interpreting a poor fit are then discussed.
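The following sketch illustrates the proposed comparison with a truncated negative exponential, using synthetic ages and a crude scale estimate; it does not reproduce the paper's exact test-of-fit.

```python
# Sketch: comparing an empirical forest age-class CDF with the CDF of a
# negative exponential truncated at the oldest observed age (illustrative;
# synthetic ages, and not the paper's exact test-of-fit).
import numpy as np

rng = np.random.default_rng(4)
mean_fire_interval = 80.0
ages = rng.exponential(mean_fire_interval, size=300)
ages = ages[ages <= 250.0]                     # censoring: very old stands are rarely observed

T = ages.max()                                 # truncation point taken from the data
b = ages.mean()                                # crude scale estimate (illustrative only)

x = np.sort(ages)
empirical_cdf = np.arange(1, len(x) + 1) / len(x)
truncated_cdf = (1 - np.exp(-x / b)) / (1 - np.exp(-T / b))   # truncated exponential CDF on [0, T]

print("max |F_empirical - F_truncated| =", np.max(np.abs(empirical_cdf - truncated_cdf)))
```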

Journal ArticleDOI
TL;DR: This paper shows that the MPH model is identifiable even from two-sided censored observations, that uniformly consistent estimators of the model do not exist, and that the identifying triples do not depend continuously on the observed cumulative distribution functions.
Abstract: We give a new proof of the identifiability of the MPH model. This proof is constructive: it is a recipe for constructing the triple (regression function, base-line hazard, and distribution of the individual effect) from the observed cumulative distribution functions. We then prove that the triples do not depend continuously on the observed cumulative distribution functions. Uniformly consistent estimators do not exist. Finally, we show that the MPH model is even identifiable from two-sided censored observations. This proof is constructive, too.

Journal ArticleDOI
TL;DR: In this article, a statistical method for predicting the annual cumulative probability distribution for the magnetic field of a set of power transmission lines is presented, which is necessary because of both random and more predictable variations of current on the lines.
Abstract: A statistical method for predicting the annual cumulative probability distribution for the magnetic field of a set of power transmission lines is presented. This is necessary because of both random and more predictable variations of current on the lines. Also described is a simple method for calculating the currents induced on, and magnetic fields of, a series impedance loaded loop which is inductively coupled to the power lines. A program (MAGFLD 2.0) has been written to incorporate these methods and has been validated theoretically and experimentally by comparison to other programs and to long-term measured transmission line data, respectively.

Journal ArticleDOI
TL;DR: In this paper, the authors considered distribution functions such that −log F̄(x−) = sup{xy − log Φ(y) : y > 0} (1 + o(1)) as x → ∞, where F̄(x−) := (1 − F)(x−).

Journal ArticleDOI
TL;DR: In this paper, the authors give necessary and sufficient conditions for a real function ξ(x) to equal a conditional expectation of order statistics built from a real, continuous and strictly monotonic function h, and characterize several continuous distributions using these conditions.
Abstract: Let X(1) ≤ X(2) ≤ … ≤ X(n) be the order statistics of a sample of size n ≥ 2 from a population with continuous distribution function F. In this paper, we obtain the distribution function F from conditional expectations of h of one order statistic given another, where h is a real, continuous and strictly monotonic function. We give necessary and sufficient conditions for a real function ξ(x) to equal such a conditional expectation. Different continuous distributions are also characterized using our results.

Journal ArticleDOI
TL;DR: A simple algorithm, based on a sharp error bound, is proposed for computing the cumulative distribution function (cdf) of a noncentral beta random variable when the noncentrality is associated only with the denominator χ2 and its computational details are discussed.
Abstract: The noncentral beta and the related noncentral F distributions have received much attention during the last decade, as is evident from the works of Norton, Lenth, Frick, Lee, Posten, Chattamvelli, and Chattamvelli and Shanmugam. This article reviews the existing algorithms for computing the cumulative distribution function (cdf) of a noncentral beta random variable, and proposes a simple algorithm, based on a sharp error bound, for computing the cdf. A variation of the noncentral beta random variable when the noncentrality is associated only with the denominator χ2 and its computational details are also discussed.
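The following sketch uses the standard Poisson-mixture series for the noncentral beta cumulative distribution function; the truncation rule here simply bounds the neglected terms by the remaining Poisson mass, which is a cruder bound than the sharp error bound proposed in the article.

```python
# Sketch: noncentral beta CDF via the standard Poisson-mixture series, with the
# remaining Poisson mass as a simple truncation bound (a stand-in for the sharper
# error bound developed in the article), plus a Monte Carlo spot check.
import numpy as np
from scipy import stats

def noncentral_beta_cdf(x, a, b, lam, tol=1e-10, max_terms=10_000):
    """F(x; a, b, lam) = sum_j exp(-lam/2) (lam/2)^j / j! * I_x(a + j, b)."""
    total = 0.0
    for j in range(max_terms):
        total += stats.poisson.pmf(j, lam / 2) * stats.beta.cdf(x, a + j, b)
        if stats.poisson.sf(j, lam / 2) < tol:   # neglected terms have at most this much mass
            break
    return total

# Spot check: a noncentral beta variate is X / (X + Y) with X ~ chi2_{2a}(lam), Y ~ chi2_{2b}.
x_num = stats.ncx2.rvs(df=4.0, nc=5.0, size=200_000, random_state=5)
y_den = stats.chi2.rvs(df=6.0, size=200_000, random_state=6)
print(noncentral_beta_cdf(0.6, a=2.0, b=3.0, lam=5.0),
      np.mean(x_num / (x_num + y_den) <= 0.6))
```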

Journal ArticleDOI
TL;DR: In this paper, the authors define conditions for increasing risk when the utility functions of risk averse investors are characterized by decreasing absolute risk aversion (DARA), and prove that for DARA utility functions, Y is riskier than X if and only if G, the cumulative distribution function of Y, can be formed from F, under the restrictions stated in the paper, by adding a series of mean-preserving spread (MPSA) steps.

Proceedings ArticleDOI
09 May 1995
TL;DR: The purpose of this paper is to highlight and resolve problems with characteristic-function-based tests for Gaussianity and to improve performance so that the test is competitive with, and in some cases better than, the most powerful known tests for Gaussianity.
Abstract: We wish to formulate a test for the hypothesis X_i ~ N(μ, σ²) for i = 0, 1, ..., N−1 against unspecified alternatives. We assume independence of the components of X = [X_0, X_1, ..., X_{N−1}]. This is a problem of universal importance, as the assumption of Gaussianity is prevalent and fundamental to many statistical theories and engineering applications. Many such tests exist, the most well-known being the χ² goodness-of-fit test with its variants and the Kolmogorov-Smirnov one-sample cumulative probability function test. More powerful modern tests for the hypothesis of Gaussianity include the D'Agostino (1977) K² and Shapiro-Wilk (1968) W tests. Tests for Gaussianity have been proposed which use the characteristic function. It is the purpose of this paper to highlight and resolve problems with these tests and to improve performance so that the test is competitive with, and in some cases better than, the most powerful known tests for Gaussianity.
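To make the characteristic-function idea concrete, here is a toy comparison of the empirical characteristic function with the characteristic function implied by the fitted mean and variance; this illustrates only the underlying idea, not the corrected test statistic developed in the paper.

```python
# Sketch: the raw idea behind characteristic-function tests for Gaussianity --
# compare the empirical CF with the CF implied by the fitted mean and variance.
# Illustration only, not the test statistic from the paper.
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_t(df=3, size=500)        # hypothetical data (heavy-tailed, non-Gaussian)

mu, sigma = x.mean(), x.std(ddof=1)
t_grid = np.linspace(0.1, 2.0, 20)

ecf = np.array([np.mean(np.exp(1j * t * x)) for t in t_grid])
gaussian_cf = np.exp(1j * t_grid * mu - 0.5 * (sigma * t_grid) ** 2)

print("max |empirical CF - fitted Gaussian CF| =", np.max(np.abs(ecf - gaussian_cf)))
```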

Journal ArticleDOI
Suojin Wang1
TL;DR: In this paper, two simple one-step methods for approximating distribution quantiles of statistics are derived by approximately inverting saddlepoint formulas for the cumulative distribution functions of the sample mean, which are suitable for use with a pocket calculator.

Journal ArticleDOI
TL;DR: The Rule of Thumb for Gamblers, as discussed in this paper, is a generalization of the Rule of Varying Gamblers (ROWG) to the case of variance, and it is shown that the strategy with the smaller variance is more favorable to the winner.

DOI
11 Aug 1995
TL;DR: In this article, a probability density function applicable to waves in finite water depth (which can be considered to be a nonlinear, non-Gaussian random process) in closed form is presented.
Abstract: This paper presents the development of a probability density function applicable to waves in finite water depth (which can be considered to be a nonlinear, non-Gaussian random process) in closed form. The derivation of the density function is based on the Kac-Siegert solution developed for a nonlinear mechanical system, but the parameters involved in the solution are evaluated from the wave record only. Further, the probability density function is asymptotically expressed in closed form. Comparisons between the presently developed probability density function and histograms constructed from wave records show good agreement.

Journal Article
TL;DR: In this paper, a field trial of the use of a simple statistical extrapolation technique for the determination of design load effects in existing bridges is described, and the cumulative distribution function is then used to estimate the level of deflection with a 1000-year return period.
Abstract: The paper describes a field trial of the use of a simple statistical extrapolation technique for the determination of design load effects in existing bridges. Deflections were measured directly using lasers in the Foyle Bridge, and data were recorded for 155 daily 48-min samples. As only traffic load effects were of interest, wind-induced deflections were removed by Fast Fourier transform analysis and temperature-induced deflections were removed through identification of traffic-free periods. A simple linear regression analysis using probability paper has been employed to determine the parameters which characterise the statistical distribution. The cumulative distribution function was then used to estimate the level of deflection with a 1000-year return period. Empirically derived formulas have been utilised to determine the variability in the 1000-year estimates and to calculate a design deflection which allows for this.
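A minimal sketch of return-level extrapolation by linear regression on (Gumbel) probability paper is given below, with synthetic daily maxima; the Gumbel choice, the plotting positions, and the one-maximum-per-day convention are our assumptions rather than the paper's exact procedure.

```python
# Sketch: extreme-value extrapolation by linear regression on Gumbel probability
# paper (synthetic daily maximum deflections; the Gumbel choice and the
# one-maximum-per-day assumption are illustrative, not the paper's exact method).
import numpy as np

rng = np.random.default_rng(7)
daily_max = 20 + 3 * rng.gumbel(size=155)           # hypothetical daily maximum deflections (mm)

x = np.sort(daily_max)
n = len(x)
p = (np.arange(1, n + 1) - 0.5) / n                 # plotting positions (non-exceedance prob.)
reduced_variate = -np.log(-np.log(p))               # Gumbel probability-paper transform

slope, intercept = np.polyfit(reduced_variate, x, deg=1)   # straight line on probability paper

return_period_days = 1000 * 365                     # 1000-year return period for daily maxima
p_target = 1 - 1 / return_period_days
design_deflection = intercept + slope * (-np.log(-np.log(p_target)))
print("estimated 1000-year deflection:", design_deflection)
```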

Proceedings ArticleDOI
17 Mar 1995
TL;DR: A PDF called fuzzy-random PDF is proposed in this paper based on considering the combined effects of both cognitive and non-cognitive uncertainties for the variable, assumed to have a fuzzy mean and a non-fuzzy standard deviation.
Abstract: Both cognitive and noncognitive uncertainties can be present in the same variable. The non-cognitive uncertainty of a variable can be described by its own probability density function (PDF); whereas the cognitive uncertainty of a random variable can be described by the membership function for its fuzziness and its α-cuts. A PDF called fuzzy-random PDF is proposed in this paper based on considering the combined effects of both cognitive and non-cognitive uncertainties for the variable. The variable is assumed to have a fuzzy mean and a non-fuzzy standard deviation. The fuzzy-random PDF is defined as the marginal density function of the multiplication of its normalized membership function and its random distribution. Relationships for the means and variances among the fuzzy-random distribution, normalized membership function, and random distribution were developed. The moments method and discrete method were proposed for dealing with the fuzzy-random PDF.