
Showing papers on "Cumulative distribution function published in 1991"


Book
01 Jan 1991
TL;DR: This book presents an overview of the global optimization problem, a survey of the approaches to its solution, and some ways of applying statistical procedures to construct global random search algorithms.
Abstract: Part 1: Global Optimization: An Overview.
1. Global Optimization Theory: General Concepts.
  1.1. Statements of the global optimization problem.
  1.2. Types of prior information about the objective function and a classification of methods: 1.2.1. Types of prior information; 1.2.2. Classification of principal approaches and methods of global optimization; 1.2.3. General properties of multiextremal functions.
  1.3. Comparison and practical use of global optimization algorithms: 1.3.1. Numerical comparison; 1.3.2. Theoretical comparison criteria; 1.3.3. Practical optimization problems.
2. Global Optimization Methods.
  2.1. Global optimization algorithms based on the use of local search techniques: 2.1.1. Local optimization algorithms; 2.1.2. Use of local algorithms in constructing global optimization strategies; 2.1.3. Multistart; 2.1.4. Tunneling algorithms; 2.1.5. Methods of transition from one local minimizer into another; 2.1.6. Algorithms based on smoothing the objective function.
  2.2. Set covering methods: 2.2.1. Grid algorithms (passive coverings); 2.2.2. Sequential covering methods; 2.2.3. Optimality of global minimization algorithms.
  2.3. One-dimensional optimization, reduction and partition techniques: 2.3.1. One-dimensional global optimization; 2.3.2. Dimension reduction in multiextremal problems; 2.3.3. Reducing global optimization to other problems in computational mathematics; 2.3.4. Branch and bound methods.
  2.4. An approach based on stochastic and axiomatic models of the objective function: 2.4.1. Stochastic models; 2.4.2. Global optimization methods based on stochastic models; 2.4.3. The Wiener process case; 2.4.4. Axiomatic approach; 2.4.5. Information-statistical approach.
Part 2: Global Random Search.
3. Main Concepts and Approaches of Global Random Search.
  3.1. Construction of global random search algorithms, basic approaches: 3.1.1. Uniform random sampling; 3.1.2. General (nonuniform) random sampling; 3.1.3. Ways of improving the efficiency of random sampling algorithms; 3.1.4. Random coverings; 3.1.5. Formal scheme of global random search; 3.1.6. Local behaviour of global random search algorithms.
  3.2. General results on the convergence of global random search algorithms.
  3.3. Markovian algorithms: 3.3.1. General scheme of Markovian algorithms; 3.3.2. Simulated annealing; 3.3.3. Methods based on solving stochastic differential equations; 3.3.4. Global stochastic approximation (Zielinski's method); 3.3.5. Convergence rate of Baba's algorithm; 3.3.6. The case of high dimension.
4. Statistical Inference in Global Random Search.
  4.1. Some ways of applying statistical procedures to construct global random search algorithms: 4.1.1. Regression analysis and design; 4.1.2. Cluster analysis and pattern recognition; 4.1.3. Estimation of the cumulative distribution function, its density, mode and level surfaces; 4.1.4. Statistical modelling (Monte Carlo method); 4.1.5. Design of experiments.
  4.2. Statistical inference concerning the maximum of a function: 4.2.1. Statement of the problem and a survey of the approaches for its solution; 4.2.2. Statistical inference construction for estimating M; 4.2.3. Statistical inference for M when the value of the tail index α is known; 4.2.4. Statistical inference when the value of the tail index α is unknown; 4.2.5. Estimation of F(t); 4.2.6. Prior determination of the value of the tail index α; 4.2.7. Exponential complexity of the uniform random sampling algorithm.
  4.3. Branch and probability bound methods: 4.3.1. Prospectiveness criteria; 4.3.2. The essence of branch and bound procedures; 4.3.3. Principal construction of branch and probability bound methods; 4.3.4. Typical variants of the branch and probability bound method.
  4.4. Stratified sampling: 4.4.1. Organization of stratified sampling; 4.4.2. Statistical inference for the maximum of a function based on its values at the points of a stratified sample; 4.4.3. Dominance of stratified over independent sampling.
  4.5. Statistical inference in random multistart: 4.5.1. Problem statement; 4.5.2. Bounded number of local maximizers; 4.5.3. Bayesian approach.
  4.6. An approach based on distributions neutral to the right: 4.6.1. Random distributions neutral to the right and their properties; 4.6.2. Bayesian testing about quantiles of random distributions; 4.6.3. Application of distributions neutral to the right to construct global random search algorithms.
5. Methods of Generations.
  5.1. Description of algorithms and formulation of the basic probabilistic model: 5.1.1. Algorithms; 5.1.2. The basic probabilistic model.
  5.2. Convergence of probability measure sequences generated by the basic model: 5.2.1. Assumptions; 5.2.2. Auxiliary statements; 5.2.3. Convergence of the sequences (5.2.7) and (5.2.8) to ?*(dx).
  5.3. Methods of generations for eigen-measure functional estimation of linear integral operators: 5.3.1. Eigen-measures of linear integral operators; 5.3.2. Closeness of eigen-measures to ?*(dx); 5.3.3. Description of the generation methods; 5.3.4. Convergence and rate of convergence of the generation methods.
  5.4. Sequential analogues of the methods of generations: 5.4.1. Functionals of eigen-measures; 5.4.2. Sequential maximization algorithms; 5.4.3. Narrowing the search area.
6. Random Search Algorithms for Solving Specific Problems.
  6.1. Distribution sampling in random search algorithms for solving constrained optimization problems: 6.1.1. Basic concepts; 6.1.2. Properties of D(x); 6.1.3. General remarks on sampling; 6.1.4. Manifold defined by linear constraints; 6.1.5. Uniform distribution on an ellipsoid; 6.1.6. Sampling on a hyperboloid; 6.1.7. Sampling on a paraboloid; 6.1.8. Sampling on a cone.
  6.2. Random search algorithm construction for optimization in functional spaces, in discrete and in multicriterial problems: 6.2.1. Optimization in functional spaces; 6.2.2. Random search in multicriterial optimization problems; 6.2.3. Discrete optimization; 6.2.4. Relative efficiency of discrete random search.
Part 3: Auxiliary Results.
7. Statistical Inference for the Bounds of Random Variables.
  7.1. Statistical inference when the tail index of the extreme value distribution is known: 7.1.1. Motivation and problem statement; 7.1.2. Auxiliary statements; 7.1.3. Estimation of M; 7.1.4. Confidence intervals for M; 7.1.5. Testing statistical hypotheses about M.
  7.2. Statistical inference when the tail index is unknown: 7.2.1. Statistical inference for M; 7.2.2. Estimation of α; 7.2.3. Construction of confidence intervals and statistical hypothesis tests for α.
  7.3. Asymptotic properties of optimal linear estimates: 7.3.1. Results and consequences; 7.3.2. Auxiliary statements and proofs of Theorem 7.3.2 and Proposition 7.1.3; 7.3.3. Proof of Theorem 7.3.1.
8. Several Problems Connected with Global Random Search.
  8.1. Optimal design in extremal experiments: 8.1.1. Extremal experiment design; 8.1.2. Optimal selection of the search direction; 8.1.3. Experimental design applying the search direction (8.1.15).
  8.2. Optimal simultaneous estimation of several integrals by the Monte Carlo method: 8.2.1. Problem statement; 8.2.2. Assumptions; 8.2.3. Existence and uniqueness of optimal densities; 8.2.4. Necessary and sufficient optimality conditions; 8.2.5. Construction and structure of optimal densities; 8.2.6. Structure of optimal densities for nondifferentiable criteria; 8.2.7. Connection with the regression design theory.
  8.3. Projection estimation of multivariate regression: 8.3.1. Problem statement; 8.3.2. Bias and random inaccuracies of nonparametric estimates; 8.3.3. Examples of asymptotically optimal projection procedures with deterministic designs; 8.3.4. Projection estimation via evaluations at random points.
References.
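
The uniform random sampling that opens Part 2 (Section 3.1.1) admits a one-line probabilistic guarantee that is worth seeing concretely. Below is a minimal Python sketch of pure random search together with that standard nonparametric bound; the objective, domain, and sample sizes are illustrative choices, not taken from the book.

```python
# Minimal sketch of pure (uniform) random search with its standard
# nonparametric guarantee: the best of n uniform points beats the
# (1 - delta)-quantile of f(X) with probability 1 - (1 - delta)^n.
import random

def pure_random_search(f, bounds, n, seed=0):
    rng = random.Random(seed)
    best_x, best_val = None, float("-inf")
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = f(x)
        if v > best_val:
            best_x, best_val = x, v
    return best_x, best_val

f = lambda x: -(x[0] - 0.3) ** 2 - (x[1] + 0.7) ** 2   # maximum at (0.3, -0.7)
print(pure_random_search(f, [(-2, 2), (-2, 2)], n=10_000))

delta, n = 0.001, 10_000
print(1 - (1 - delta) ** n)  # chance the record value beats the top 0.1%
```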

321 citations


Book ChapterDOI
01 Jan 1991
TL;DR: The significance, ubiquity and utility of copulas are being recognized, given that they are the higher-dimensional analogues of uniform distributions on the unit interval.
Abstract: In 1959, in response to a query of M. Fréchet, A. Sklar introduced copulas. These are functions that link multivariate distributions to their one-dimensional margins. Thus, if H is an n-dimensional cumulative distribution function with one-dimensional margins F1,…,Fn, then there exists an n-dimensional copula C (which is unique when F1,…,Fn are continuous) such that H(x1,…,xn) = C(F1(x1),…,Fn(xn)). During the years 1959-1974, most results concerning copulas were obtained in the course of the development of the theory of probabilistic metric spaces, principally in connection with the study of families of binary operations on the space of probability distribution functions. Then it was discovered that two-dimensional copulas could be used to define nonparametric measures of dependence for pairs of random variables. In the ensuing years the copula concept was rediscovered on several occasions, and these functions began to play an ever-more-important role in mathematical statistics, particularly in matters involving questions of dependence, fixed marginals and functions of random variables that are invariant under monotone transformations. Today, in view of the fact that they are the higher-dimensional analogues of uniform distributions on the unit interval, and as the result of the efforts of a diverse group of scholars, the significance, ubiquity and utility of copulas are being recognized. This paper is devoted to an historical overview and rather personal account of these developments and to a description of some recent results.
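
Sklar's representation H(x1,…,xn) = C(F1(x1),…,Fn(xn)) is easy to exercise numerically. The sketch below builds a bivariate CDF from two margins and the Clayton copula; the copula family and margins are our illustrative choices, not ones singled out by the paper.

```python
# Sketch of Sklar's theorem: build a bivariate CDF H(x, y) = C(F(x), G(y))
# from two margins and a copula. Clayton copula and margins are arbitrary.
import math

def clayton(u, v, theta=2.0):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    if u == 0.0 or v == 0.0:
        return 0.0
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

F = lambda x: 1.0 - math.exp(-x) if x > 0 else 0.0       # exponential(1) margin
G = lambda y: 0.5 * (1.0 + math.erf(y / math.sqrt(2)))   # standard normal margin

def H(x, y, theta=2.0):
    """Joint CDF with the given margins, coupled by the Clayton copula."""
    return clayton(F(x), G(y), theta)

print(H(1.0, 0.0))  # joint probability P(X <= 1, Y <= 0)
```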

196 citations


Journal ArticleDOI
01 Mar 1991
TL;DR: In this paper, a bivariate probability density function (pdf), f(x1, x2), admissible for two random variables (X1, X2), is of the form given in the abstract.
Abstract: A bivariate probability density function (pdf), f(x1, x2), admissible for two random variables (X1, X2), is of the form $$f(x_1, x_2) = f_1(x_1) f_2(x_2)[1 + \rho\{F_1(x_1), F_2(x_2)\}]$$ where ρ(u, v) (u = F1(x1), v = F2(x2)) is any function on the unit square that is 0-marginal and bounded below by −1, and F1(x1) and F2(x2) are the cumulative distribution functions (cdfs) of the marginal probability density functions f1(x1) and f2(x2). The purpose of this study is to determine f(x1, x2) for different forms of ρ(u, v). By considering the rainfall intensity and the corresponding depths as dependent random variables, observed and computed probability distributions F1(x1), F(x1|x2), F2(x2), and F(x2|x1) are compared for various forms of ρ(u, v). Subsequently, the best form of ρ(u, v) is specified.
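
One admissible choice of ρ(u, v) is the Farlie-Gumbel-Morgenstern form ρ(u, v) = a(1 − 2u)(1 − 2v); the paper compares several such forms. The sketch below, with assumed exponential margins, evaluates the resulting density and crudely checks that it integrates to one.

```python
# Sketch of the density family above with the Farlie-Gumbel-Morgenstern
# choice rho(u, v) = a*(1 - 2u)*(1 - 2v), |a| <= 1, which is 0-marginal
# and bounded below by -1. The exponential margins are arbitrary.
import math

def f_joint(x1, x2, a=0.5, l1=1.0, l2=2.0):
    f1 = l1 * math.exp(-l1 * x1)
    f2 = l2 * math.exp(-l2 * x2)
    F1 = 1.0 - math.exp(-l1 * x1)
    F2 = 1.0 - math.exp(-l2 * x2)
    rho = a * (1.0 - 2.0 * F1) * (1.0 - 2.0 * F2)
    return f1 * f2 * (1.0 + rho)

# Crude midpoint check that the density integrates to ~1 over a large box.
h, total = 0.01, 0.0
for i in range(1000):
    for j in range(1000):
        total += f_joint((i + 0.5) * h, (j + 0.5) * h) * h * h
print(round(total, 3))  # ~1.0
```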

109 citations


Journal ArticleDOI
TL;DR: In this article, a statistical transport model for turbulent particle dispersion is formulated that significantly improves computational efficiency over the conventional stochastic discrete-particle methodology; the mean of each pdf is determined by Lagrangian tracking of each computational parcel, either deterministically or stochastically.
Abstract: A statistical transport model for turbulent particle dispersion is formulated that has significantly improved computational efficiency in comparison to the conventional stochastic discrete-particle methodology. In the proposed model, a computational parcel representing a group of physical particles is characterized by a normal (Gaussian) probability density function (pdf) in space. The mean of each pdf is determined by Lagrangian tracking of each computational parcel, either deterministically or stochastically.
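
A hedged one-dimensional sketch of the parcel idea (not the authors' implementation): each parcel carries a Gaussian spatial pdf, and the mean concentration field is a weighted sum of parcel pdfs.

```python
# Each computational parcel carries a Gaussian spatial pdf; the mean
# particle concentration is the weighted sum of parcel pdfs. 1-D toy values.
import math

def gaussian_pdf(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# (mean position, spread, number of physical particles) per parcel
parcels = [(1.0, 0.3, 1e4), (1.4, 0.5, 2e4), (2.1, 0.4, 1.5e4)]

def concentration(x):
    """Mean particle concentration at x as a sum over parcel pdfs."""
    return sum(n * gaussian_pdf(x, m, s) for m, s, n in parcels)

print(concentration(1.5))
```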

71 citations


Journal ArticleDOI
TL;DR: In this article, a particular functional of the corresponding empirical probability generating function process is proposed as a measure of the discrepancy between the evidence and the hypothesis; the study is exemplified for the Poisson case, but the procedure can be extended to other discrete distributions.
Abstract: For testing the fit of a discrete distribution, use of the probability generating function and its empirical counterpart has been suggested in Kocherlakota and Kocherlakota (1986). In the present paper, a particular functional of the corresponding empirical probability generating function process is proposed as a measure of the discrepancy between the evidence and the hypothesis. The asymptotic behavior of the empirical probability generating function when a parameter is estimated is obtained. The study is exemplified for the Poisson case only, but the procedure can be extended to other discrete distributions.
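
A sketch of the kind of statistic involved (the paper's exact functional may differ): compare the empirical pgf with the fitted Poisson pgf over t in [0, 1].

```python
# Empirical pgf  g_n(t) = (1/n) * sum_i t^{X_i}  versus the fitted Poisson
# pgf  exp(lambda_hat * (t - 1)),  summarized by a sup-distance over [0, 1].
# This illustrative functional is an assumed form, not the paper's.
import math

def pgf_test_statistic(sample, grid=101):
    n = len(sample)
    lam = sum(sample) / n  # MLE of the Poisson mean
    sup = 0.0
    for k in range(grid):
        t = k / (grid - 1)
        emp = sum(t ** x for x in sample) / n
        fit = math.exp(lam * (t - 1.0))
        sup = max(sup, math.sqrt(n) * abs(emp - fit))
    return sup

print(pgf_test_statistic([2, 1, 0, 3, 1, 2, 0, 1, 4, 2]))
```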

52 citations


Journal ArticleDOI
TL;DR: A new algorithm based on a unidirectional search from the mode is proposed, and the modal probability and modal cumulative probability, when required, are calculated by simple and rapid, yet extremely accurate, asymptotic approximations.
Abstract: The paper examines the problem of generating Poisson random variates, particularly when the parameter λ may vary from call to call. A new algorithm based on a unidirectional search from the mode is proposed; the modal probability and modal cumulative probability, when required, are calculated by simple and rapid, yet extremely accurate, asymptotic approximations; a squeeze feature is incorporated. Timings for a Fortran 77 implementation show that the algorithm dominates the current state-of-the-art algorithms for λ
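
The following is a simplified mode-centred inversion sampler that conveys the idea only; the paper's algorithm searches unidirectionally and replaces the exact modal probabilities with fast asymptotic approximations plus a squeeze.

```python
# Simplified mode-centred inversion for Poisson(lam): accumulate probability
# mass outward from the mode in a fixed order until the uniform draw is
# covered. A valid sampler, but not the paper's unidirectional algorithm.
import math, random

def poisson_from_mode(lam, rng=random.random):
    m = int(lam)                                   # mode of Poisson(lam)
    pm = math.exp(m * math.log(lam) - lam - math.lgamma(m + 1))
    u = rng()
    acc, lo, plo, hi, phi = pm, m, pm, m, pm
    if u <= acc:
        return m
    while True:                                    # expand outward from the mode
        if lo > 0:
            plo *= lo / lam; lo -= 1               # p(k-1) = p(k) * k / lam
            acc += plo
            if u <= acc: return lo
        phi *= lam / (hi + 1); hi += 1             # p(k+1) = p(k) * lam / (k+1)
        acc += phi
        if u <= acc: return hi

print([poisson_from_mode(8.5) for _ in range(10)])
```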

28 citations


Journal ArticleDOI
TL;DR: In this paper, output reliability, expressed by the output variable's cumulative distribution function, probability density function, confidence limits, standard deviation, or probability of acceptable percent deviation from the predicted value, is proposed as a measure to help choose the appropriate model for a given rainfall-runoff model use.
Abstract: Selection of rainfall‐runoff models is dependent on the modeling objectives and the simulation accuracy required. Output reliability is proposed as a measure to help in choosing the appropriate model for a given rainfall‐runoff model use. Output reliability can be expressed by the output variable cumulative distribution function, probability density function, confidence limits, standard deviation, or probability of acceptable percent deviation from the predicted value. Approximations of these reliability measures could be established by methods such as first‐order second‐moment reliability analysis. To demonstrate the approach, an example is given of estimating peak discharge for a watershed in central Illinois using both the HEC‐1 and RORB rainfall‐runoff models.
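
A minimal sketch of first-order second-moment (FOSM) reliability analysis, the approximation method the abstract names; the toy discharge model g and all input moments below are illustrative, not from the paper.

```python
# First-order second-moment (FOSM) sketch: approximate the mean and variance
# of a model output Q = g(X) from the means and variances of its inputs,
# using a finite-difference gradient. Independent inputs assumed.
import math

def fosm(g, mean, var, eps=1e-5):
    m = g(mean)
    grad = []
    for i in range(len(mean)):
        x = list(mean); x[i] += eps
        grad.append((g(x) - m) / eps)
    v = sum(gi * gi * vi for gi, vi in zip(grad, var))
    return m, v

# Toy peak-discharge model: Q = c * A * i (runoff coefficient, area, intensity)
g = lambda x: x[0] * x[1] * x[2]
mean_Q, var_Q = fosm(g, [0.6, 12.0, 25.0], [0.01, 1.0, 16.0])
print(mean_Q, math.sqrt(var_Q))  # first-order mean and standard deviation
```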

23 citations


Journal ArticleDOI
TL;DR: In this article, the percent deviations of the Weibull parameters alpha and beta (scale and shape parameters, respectively) estimated by the weighted Blom, Bernard, Weibull, maximum likelihood (MLE), and Bain-Engelhardt (BE) estimators from the true values are compared as functions of the sample size.
Abstract: It is pointed out that the Bernard estimator is very accurate with respect to other median or mean approximated ranks, confirming results presented by J.C. Fothergill in the above-titled paper (see ibid., vol. 25, p. 489-92, 1990). However, it is argued that the weighted Blom seems the most accurate estimator, among graphical ones, for normally sized samples and a reliable one for small sample sizes, although the Bernard and Filliben estimators are better than Weibull and Blom for small sample sizes. Statistical and graphical estimators are compared by plotting the percent deviations of the values of the Weibull parameters alpha and beta (scale and shape parameters, respectively) estimated by weighted Blom, Bernard, Weibull, maximum likelihood estimation (MLE), and Bain-Engelhardt (BE), with respect to the true values, as well as the estimates obtained by MLE and BE, as functions of the sample size. In reply, Fothergill states that for small sample sizes the advantages of more sophisticated techniques do not seem warranted.
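
For concreteness, here is a sketch of graphical Weibull estimation with Bernard's median-rank approximation F_i ≈ (i − 0.3)/(n + 0.4); the sample values are made up.

```python
# Graphical Weibull fit via Bernard median ranks: least squares of
# ln(-ln(1 - F_i)) against ln(x_i) gives shape beta; the intercept
# -beta*ln(alpha) then gives the scale alpha.
import math

def weibull_graphical(sample):
    xs = sorted(sample)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))))
           for i, x in enumerate(xs)]          # i + 1 - 0.3 = i + 0.7
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    beta = sum((px - mx) * (py - my) for px, py in pts) / \
           sum((px - mx) ** 2 for px, _ in pts)
    alpha = math.exp(mx - my / beta)
    return alpha, beta

print(weibull_graphical([72, 82, 97, 103, 113, 117, 126, 127, 127, 139]))
```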

22 citations


Journal ArticleDOI
TL;DR: The behaviour of the new normal test variable (NTV) plot was evaluated, in comparison with the probit plot and probability density functions (the generalization of histograms), for various assumed distributions.
Abstract: 1. A new graphical method was developed for the detection of deviations from the normal distribution. The approach took advantage of the similarity of graphical features of a graded dose-response relationship and a cumulative normal distribution. 2. The behaviour of the new normal test variable (NTV) plot was evaluated, in comparison with that of the probit plot and probability density functions (the generalization of histograms), for various assumed distributions. These included skewed distributions and composites of normal distributions with a variety of separations, ratios of peak sizes and widths. 3. The NTV approach generally detected deviations from the normal distribution more sensitively than the probit plot. 4. The NTV and probit plots may be able to identify bimodality by complementary approaches. 5. The characteristics of the three graphical representations were illustrated by a simulated sample from a composite of normal distributions and by an example of sparteine metabolism in 142 Cuna Amerindians.
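
The probit plot that serves as the comparison baseline is easy to reproduce; the NTV construction itself is specific to the paper and is not reproduced here.

```python
# Classical probit plot coordinates: the inverse-normal transform of the
# empirical CDF against the sorted data; a straight line suggests normality.
from statistics import NormalDist

def probit_coordinates(sample):
    xs = sorted(sample)
    n = len(xs)
    # plotting positions (i - 0.5)/n avoid the endpoints 0 and 1
    return [(x, NormalDist().inv_cdf((i + 0.5) / n)) for i, x in enumerate(xs)]

for x, z in probit_coordinates([4.1, 5.0, 5.2, 5.9, 6.3, 7.7]):
    print(f"{x:5.1f}  {z:+.3f}")
```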

19 citations


Journal ArticleDOI
David Pearson1
TL;DR: Probability analysis of vitrinite-reflectance data and whole-coal reflectance data involves assigning a probability based on the standard deviation of each reflectance value measured, and displaying the data in the form of probability graphs, as discussed by the authors.

18 citations


Journal ArticleDOI
Attila Csenki1
TL;DR: In this paper, the authors derived the probability mass function and cumulative distribution function of the joint distribution of the first m sojourn times of absorbing Markov chains, with application to a fault-tolerant multiprocessor system.

Journal ArticleDOI
TL;DR: In this article, the reliability of a short reinforced concrete column subject to various loads is assessed using Monte Carlo simulation; large samples are constructed for the minimum distance of the stress point from the boundary of the section's resisting domain.
Abstract: The reliability of a short reinforced concrete (RC) column subject to various loads is assessed. The loads yield a two-dimensional stress, axial force N and bending moment M. Methods for solving this kind of problem are reviewed; Monte Carlo simulation proves to be suitable. With reference to the lower-end section of the column, the point (NS, MS) in the N, M plane represents the stress; the minimum distance of this point from the boundary of the resisting domain of the section is taken as a reliability measure. Using Monte Carlo simulation, large samples are constructed for the minimum distance. The resisting domain and the point (NS, MS) are random quantities; the latter depends on the applied loads: dead, snow, and wind. Snow and wind loads are stochastic processes and are schematized as filtered Poisson processes. Examining the samples, the cumulative distribution function (CDF) of the distance is found by statistical methods. The value of the CDF at zero gives the probability of failure.
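
A toy Monte Carlo sketch of this failure-probability estimate; the real resisting domain and filtered-Poisson load processes are far richer than the stand-ins below.

```python
# Sample the random loads, compute a signed distance to the limit state,
# and read the empirical CDF of the distance at zero.
import random

def signed_distance(n_s, m_s, n_r=1000.0, m_r=300.0):
    """Toy margin: positive inside the resisting domain, negative outside."""
    return 1.0 - (n_s / n_r) - abs(m_s) / m_r   # illustrative limit state only

random.seed(1)
samples = []
for _ in range(100_000):
    n_s = random.gauss(400.0, 80.0)    # axial force (dead + snow + wind), toy
    m_s = random.gauss(0.0, 60.0)      # bending moment, toy
    samples.append(signed_distance(n_s, m_s))

p_failure = sum(d <= 0.0 for d in samples) / len(samples)
print(p_failure)  # empirical CDF of the distance evaluated at zero
```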

Journal ArticleDOI
TL;DR: In this article, the mean and variance of the maximum absolute value of a nonstationary process representing the response buildup of a linear oscillator are computed by simulation and by approximate analytical techniques, with emphasis on a comparative evaluation of available techniques and proposed procedures.
Abstract: The mean and the variance are investigated for the maximum absolute value of a nonstationary process that represents the response buildup of a linear oscillator. The mean and variance are computed both by simulation and approximate analytical techniques, with emphasis on a comparative evaluation of available techniques and proposed procedures. The cumulative distribution function of the maximum value is represented by a form that is a function of a conditional rate of barrier crossings. This rate is computed by using a nonstationary approximation of Poisson crossings, and also by using available empirical expressions. In addition, existing expressions are used to estimate the mean and the variance of the maximum value of a nonstationary process by defining a shortened duration for an “equivalent” stationary process. Finally, the first passage problem is also posed as one governed by classical state‐space moment equations, and a modified Gaussian closure technique is used to obtain an approximate solution.
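
The Poisson-crossing form of the maximum's CDF can be written down directly: P(max |X(t)| ≤ b over [0, T]) ≈ exp(−∫ ν(b, t) dt). A hedged numerical sketch follows, using the standard double-barrier Gaussian crossing rate and an assumed variance buildup.

```python
# Poisson-crossing approximation with double-barrier Gaussian rate
# nu(b, t) = (sigma_dot/(pi*sigma)) * exp(-b^2 / (2*sigma^2)).
# The buildup sigma_x(t) and the velocity scale are illustrative only.
import math

def sigma_x(t):      # growing response std of the oscillator (toy buildup)
    return math.sqrt(max(1e-12, 1.0 - math.exp(-0.5 * t)))

def sigma_xdot(t):   # std of the velocity process (toy, ~ omega * sigma_x)
    return 6.0 * sigma_x(t)

def p_no_crossing(b, T, n=2000):
    acc = 0.0
    for k in range(n):
        t = (k + 0.5) * T / n
        nu = sigma_xdot(t) / (math.pi * sigma_x(t)) * math.exp(-b * b / (2 * sigma_x(t) ** 2))
        acc += nu * T / n
    return math.exp(-acc)

print(p_no_crossing(b=3.0, T=10.0))  # P(max |X| <= 3 over 10 s), approx.
```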

Journal ArticleDOI
TL;DR: This paper presents empirically determined guidelines for specifying the number of features appropriate for multivariate classification studies for given sample sizes, based on the sample size data of 34 key papers on clinical body surface potential mapping (BSPM).

Journal ArticleDOI
TL;DR: In this article, the ground input motion is described by its power spectral density (PSD) function, and the PSD functions of the design parameters are obtained through transfer functions; using the cumulative probability function of the maximum response proposed by Vanmarcke, design values are derived according to an acceptable probability of exceedance.
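
A minimal sketch of the PSD pipeline, with a toy single-degree-of-freedom oscillator and a white-noise input; the paper's transfer functions and the Vanmarcke peak factor are not reproduced.

```python
# Pass an input PSD through a SDOF transfer function, S_out = |H|^2 * S_in,
# and integrate to get the response variance used in peak-response estimates.
# Oscillator parameters and input level are illustrative.
import math

m, c, k = 1.0, 0.6, 100.0           # mass, damping, stiffness (toy SDOF)
S_in = lambda w: 0.05               # flat (white-noise) ground-input PSD

def H2(w):
    """Squared modulus of the SDOF displacement transfer function."""
    return 1.0 / ((k - m * w * w) ** 2 + (c * w) ** 2)

var, dw = 0.0, 0.01
for i in range(20000):              # integrate S_out over 0..200 rad/s
    w = (i + 0.5) * dw
    var += H2(w) * S_in(w) * dw
print(math.sqrt(var))               # response standard deviation
```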

Posted Content
TL;DR: In this paper, the expected value function of a stochastic integer linear programming problem with simple recourse is reduced to a separable function, which allows the one-dimensional functions g(x) = E(ξ − x)+ and h(x) = E(x − ξ)+ to be studied instead.
Abstract: We consider the expected value function of a stochastic integer linear programming problem with simple recourse. By making appropriate simplifications we reduce it to a separable function, which allows us to study the one-dimensional functions g(x) = E(ξ − x)+ and h(x) = E(x − ξ)+ instead. We derive formulas for the functions g and h in terms of the cumulative distribution function F of the random variable ξ. We find conditions for finiteness, (Lipschitz-)continuity, differentiability and convexity of g and h. Since in general g and h are not convex functions, much attention is paid to finding their convex hulls, g** and h** respectively. We prove that g**(x) = E(ζ − x)+ and h**(x) = E(x − ψ)+, where the random variables ζ and ψ have cumulative distribution functions G and H respectively. Moreover, if F belongs to a certain class, we find explicit formulas for g**, G, h** and H. As examples, we derive explicit formulas for the exponential and uniform distributions.
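
For the exponential case the formula for g is elementary, which makes a quick numerical check possible; this illustrates g only, not the paper's convex-hull construction.

```python
# For xi ~ exponential(lam), g(x) = E(xi - x)^+ = exp(-lam*x)/lam for x >= 0.
# Monte Carlo versus the closed form.
import math, random

lam, x = 1.5, 0.8
random.seed(7)
n = 200_000
mc = sum(max(random.expovariate(lam) - x, 0.0) for _ in range(n)) / n
closed = math.exp(-lam * x) / lam
print(mc, closed)  # should agree to roughly two or three decimals
```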

Journal ArticleDOI
TL;DR: In this article, a mixed-integer chance-constrained optimization model for structural design is presented, which incorporates the probabilistic nature of the loadings experienced throughout the life of the structure and can be used to investigate the tradeoffs between cost and the probability of failure.
Abstract: A mixed‐integer chance‐constrained optimization model is presented for structural design. The model incorporates the probabilistic nature of the loadings experienced throughout the life of the structure and can be used to investigate the trade‐offs between cost and the probability of failure. Design constraints are originally expressed as probability statements, in which the design parameters are considered to be random variables with established probability distributions. The chance constraints impose lower bounds on the likelihood or chance that specified failure criteria are exceeded for each structural member. Using the cumulative distribution function for each design parameter, the chance constraints may be expressed as deterministic equivalent constraints. Possible design solutions are limited to combinations of available standard sections using binary decision variables. An example is presented using a one‐story, one‐bay steel frame, and the results are compared with those obtained through conventi...
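
The inverse-CDF step that turns a chance constraint into its deterministic equivalent can be shown in a few lines, assuming a normal load for illustration.

```python
# Deterministic equivalent of a chance constraint: require P(R >= L) >= p
# for a normal load L, i.e. R >= mu_L + sigma_L * Phi^{-1}(p).
# Load moments and reliability level are toy values.
from statistics import NormalDist

mu_L, sigma_L = 120.0, 15.0    # load mean and standard deviation
p = 0.99                        # required reliability for the member

R_min = mu_L + sigma_L * NormalDist().inv_cdf(p)
print(R_min)  # minimum design resistance satisfying the chance constraint
```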

Proceedings ArticleDOI
E. Frangoulis1
14 Apr 1991
TL;DR: The author reports on the application of vector quantization (VQ) of the continuous HMM (hidden Markov model) and noise distributions for isolated word recognition in a noisy environment, which resulted in much improved recognition performance.
Abstract: The author reports on the application of vector quantization (VQ) of the continuous HMM (hidden Markov model) and noise distributions for isolated word recognition in a noisy environment. Separate codebooks and HMMs for noise-free speech and noise are constructed. An input frame is associated with the speech or noise codebook according to the minimum distortion criterion, and the probability density function of the combined speech-noise signal is obtained from the vector-quantized pdfs of speech and noise, and their respective cumulative distribution functions. The methodology was applied to an isolated digit recognition task in an office environment, in a car environment, and with additive white and impulsive noise. The VQ approach resulted in much improved recognition performance.
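
One standard cdf-based combination rule for independent speech and noise in the log-spectral domain is the "max" model, sketched below with Gaussian stand-ins for the vector-quantized pdfs; whether this matches the author's exact formulation is not established by the abstract.

```python
# Max model: if Z = max(S, N) with S, N independent, then
# f_Z(x) = f_S(x) * F_N(x) + F_S(x) * f_N(x),
# i.e. the combined pdf comes from the component pdfs and cdfs.
from statistics import NormalDist

speech = NormalDist(mu=10.0, sigma=3.0)   # toy speech log-energy pdf
noise = NormalDist(mu=6.0, sigma=1.5)     # toy noise log-energy pdf

def combined_pdf(x):
    return speech.pdf(x) * noise.cdf(x) + speech.cdf(x) * noise.pdf(x)

print(combined_pdf(9.0))
```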

Proceedings ArticleDOI
04 Nov 1991
TL;DR: It is shown that the generalized Pareto distribution can be used to approximate the extreme tail of the density function and it is demonstrated that accurate results can be obtained with orders of magnitude fewer samples than are required by Monte Carlo simulation.
Abstract: To set the radar threshold for small false alarm probabilities, it is necessary to know the tail of the probability density function for the test statistic under the no target assumption. It is shown that the generalized Pareto distribution (GPD) can be used to approximate the extreme tail of the density function. As a result, fixing the threshold is equivalent to estimating the two parameters of the GPD. For a variety of probability density functions it is demonstrated that accurate results can be obtained with orders of magnitude fewer samples than are required by Monte Carlo simulation. The thresholds required for very low false alarm probabilities were obtained with a good deal of accuracy.
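
A sketch of the peaks-over-threshold recipe with a method-of-moments GPD fit; the paper's estimation procedure may differ.

```python
# Fit a generalized Pareto distribution to exceedances over a high threshold
# u by the method of moments, then extrapolate the detection threshold for a
# target false-alarm probability.
import math, random

def gpd_threshold(data, u, p_fa):
    exc = [x - u for x in data if x > u]
    p_u = len(exc) / len(data)            # empirical exceedance probability
    m = sum(exc) / len(exc)
    v = sum((e - m) ** 2 for e in exc) / (len(exc) - 1)
    xi = 0.5 * (1.0 - m * m / v)          # method-of-moments shape
    sigma = 0.5 * m * (1.0 + m * m / v)   # method-of-moments scale
    if abs(xi) < 1e-6:                    # exponential-tail limit
        y = sigma * math.log(p_u / p_fa)
    else:                                 # invert p_u*(1 + xi*y/sigma)^(-1/xi) = p_fa
        y = sigma / xi * ((p_fa / p_u) ** (-xi) - 1.0)
    return u + y

random.seed(3)
data = [random.expovariate(1.0) for _ in range(50_000)]
print(gpd_threshold(data, u=3.0, p_fa=1e-5))  # exp(1) tail: true value ~11.5
```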

Journal ArticleDOI
TL;DR: In this article, the authors examined the non-null asymptotic distributions of several functions of one-tailed sample probability values (from t tests) and derived additional approximations, based on variance-stabilizing transformations of ln(p) and z(p).
Abstract: The observed probability p is the social scientist's primary tool for evaluating the outcomes of statistical hypothesis tests. Functions of ps are used in tests of "combined significance," meta-analytic summaries based on sample probability values. This study examines the nonnull asymptotic distributions of several functions of one-tailed sample probability values (from t tests). Normal approximations were based on the asymptotic distributions of z(p), the standard normal deviate associated with the one-sided p value; of ln(p), the natural logarithm of the probability value; and of several modifications of ln(p). Two additional approximations, based on variance-stabilizing transformations of ln(p) and z(p), were derived. Approximate cumulative distribution functions (cdfs) were compared to the computed exact cdf of the p associated with the one-sample t test. Approximations to the distribution of z(p) appeared quite accurate even for very small samples, while other approximations were inaccurate unless sample sizes or effect sizes were very large. Approximations based on variance-stabilizing transformations were not much more accurate than those based on ln(p) and z(p). Generalizations of the results are discussed, and implications for use of the approximations conclude the article.
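
As a related worked example (Fisher's classical combination, not one of the paper's approximations), functions of ln(p) yield an exact null distribution: under the null, −2 Σ ln p_i follows a chi-square law with 2k degrees of freedom.

```python
# Fisher's combined-significance test. For even degrees of freedom the
# chi-square survival function has the closed form
# sf(x; 2k) = exp(-x/2) * sum_{j<k} (x/2)^j / j!, so plain math suffices.
import math

def fisher_combined_p(pvals):
    x = -2.0 * sum(math.log(p) for p in pvals)
    k = len(pvals)                       # chi-square df = 2k
    term, total = 1.0, 0.0
    for j in range(k):
        if j > 0:
            term *= (x / 2.0) / j
        total += term
    return math.exp(-x / 2.0) * total

print(fisher_combined_p([0.04, 0.10, 0.03]))  # combined significance level
```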

01 Jan 1991
TL;DR: In this article, the normalized incomplete gamma functions P(a, x) and Q(a, x) are inverted for large values of the parameter a, starting from the error-function term that dominates their uniform asymptotic expansions.
Abstract: The normalized incomplete gamma functions P(a, x) and Q(a, x) are inverted for large values of the parameter a. That is, x-solutions of the equations P(a, x) = p, Q(a, x) = q, p ∈ [0, 1], q = 1 − p, are considered, especially for large values of a. The approximations are obtained by using uniform asymptotic expansions of the incomplete gamma functions in which an error function is the dominant term. The inversion problem is started by inverting this error function term. Numerical results indicate that, to obtain an accuracy of four correct digits, the method can already be used at a = 2, even though a is assumed to be a large parameter. It is indicated that the method can be applied to other cumulative distribution functions.
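
A hedged sketch of the task: Newton iteration for P(a, x) = p seeded by a normal (error-function) approximation, in the spirit of, but much cruder than, the paper's uniform expansions. Requires SciPy for the incomplete gamma function.

```python
# Solve P(a, x) = p by Newton iteration; the Wilson-Hilferty normal
# approximation provides the starting point. d/dx P(a, x) = x^(a-1) e^-x / Gamma(a).
from statistics import NormalDist
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)
import math

def inv_gammainc(a, p, iters=20):
    z = NormalDist().inv_cdf(p)
    x = a * (1.0 - 1.0 / (9.0 * a) + z / (3.0 * math.sqrt(a))) ** 3  # initial guess
    for _ in range(iters):
        f = gammainc(a, x) - p
        pdf = math.exp((a - 1.0) * math.log(x) - x - math.lgamma(a))
        x = max(x - f / pdf, 1e-12)
    return x

print(inv_gammainc(2.0, 0.3))  # compare: scipy.special.gammaincinv(2.0, 0.3)
```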

Journal ArticleDOI
TL;DR: In this article, the authors constructed a lower bound for the statistical fluctuations of the survival probability of particles moving diffusively in a medium with random traps and showed that in the asymptotic regime the total survival probability is a non-self averaging quantity, in that the fluctuations dominate the mean value.
Abstract: Through a very simple treatment we have constructed a lower bound for the statistical fluctuations of the survival probability of particles moving diffusively in a medium with random traps. We show that in the asymptotic regime the total survival probability is a non-self averaging quantity, in that the fluctuations dominate the mean value.

Journal ArticleDOI
TL;DR: In this paper, the authors examined a number of unconventional alternatives derived from a general class of multivariate distribution systems and showed that these distribution systems are flexible and closely emulate multivariate normality.
Abstract: Though multivariate normality is the warranted distributional form in linear models, its usefulness and simplicity are drastically curtailed when one has to evaluate the multivariate normal cumulative. In this paper we examine a number of unconventional alternatives derived from a general class of multivariate distribution systems. We show that these distribution systems are flexible and closely emulate multivariate normality. Our study focuses primarily on multivariate models that include censored or qualitative dependent variables. However, these findings are extendable to any other application that requires an evaluation of the multivariate normal cumulative distribution.
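
The bottleneck in question, evaluating the multivariate normal CDF, can be seen already in two dimensions; SciPy's numerical routine is used below with illustrative parameters.

```python
# Bivariate normal CDF via SciPy's numerical integration. No closed form
# exists beyond special cases, which motivates the search for alternatives.
from scipy.stats import multivariate_normal

mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]])
print(mvn.cdf([0.0, 0.0]))  # P(X1 <= 0, X2 <= 0) = 1/4 + arcsin(0.5)/(2*pi) = 1/3
```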

Journal ArticleDOI
Douglas W. Cooper1
TL;DR: The characteristics of the chosen particle size distributions matter for two specification documents currently under revision: FED-STD-209D, concerning air cleanliness in manufacturing, which uses cumulative particle size distributions that are linear when plotted on log-log axes (power-law distributions), and MIL-STD-1246B, concerning surface cleanliness.
Abstract: Particle size strongly influences particle behavior. To summarize the distribution of particle sizes, a distribution function can be used. The characteristics of the particle size distributions chosen are important for two specification documents currently under revision: (1) FED-STD-209D, concerning air cleanliness in manufacturing, which uses cumulative particle size distributions that are linear when plotted on log-log axes; these are power-law distributions. (2) MIL-STD-1246B, "Product Cleanliness Levels and Contamination Control Programs," primarily concerning surface cleanliness, which uses cumulative particle size distributions that are linear when plotted as the logarithm of the cumulative distribution versus the square of the logarithm of the particle size, log²x. A third distribution, the lognormal, is commonly found in aerosol science, especially where there is a single particle source. The distributions are compared and discussed. The FED-STD-209D power-law distribution can approximate a lognormal...
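
The two specification forms can be compared directly; the coefficients below are placeholders, not the standards' actual values.

```python
# A power law is linear in log N versus log x (209-style), while the
# MIL-STD-1246-type form is linear in log N versus (log x)^2.
import math

def n_power_law(x, n_ref=3500.0, x_ref=0.5, slope=2.2):
    """Cumulative count of particles >= x under a power law."""
    return n_ref * (x / x_ref) ** (-slope)

def n_mil_style(x, x1=100.0, c=0.9):
    """Cumulative count with log10 N = c * (log10(x1)^2 - log10(x)^2)."""
    return 10.0 ** (c * (math.log10(x1) ** 2 - math.log10(x) ** 2))

for x in (1.0, 5.0, 25.0):
    print(x, n_power_law(x), n_mil_style(x))
```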




Journal ArticleDOI
TL;DR: In this article, a chance-constrained optimization model is presented for determining dynamic control parameters for active members within structural designs, where the optimal feedback control parameters are determined by minimizing the levels of each control parameter conditioned on the probability that displacements at any time do not exceed pre-specified maxima.

Journal ArticleDOI
TL;DR: For the probability of work capacity of a product, this paper constructed confidence intervals in various cases of normal distribution when the work capacity is a function of two comparable random variables, and showed that the confidence intervals are tight.
Abstract: For the probability of work capacity of a product, we construct confidence intervals for various cases of the normal distribution, when the work capacity is a function of two comparable random variables.

Proceedings ArticleDOI
29 Jan 1991
TL;DR: Design criteria for the wall thickness of space structures to protect them from the expected collision impact of meteoroids during their mission periods in space are presented.
Abstract: Design criteria for the wall thickness of space structures to protect them from the expected collision impact of meteoroids during their mission periods in space are presented. The maximum expected damage by a meteoroid on a unit surface area is stochastically evaluated by applying an analytical method based on a Poisson process analysis. First, the cumulative distribution function (CDF) and mean arrival rate of meteoroids are determined from the relation between the flux number and mass of meteoroids given by NASA. Next, the probability of the maximum mass arriving at a unit surface area during a certain period of time is calculated using the Poisson process and a function of the CDF and mean arrival rate of meteoroids. To obtain the CDF for the penetration depth from a meteoroid impact, the cumulative distribution function of mass has to be replaced by that of the penetration depth. The expected penetration depth for a space structure is provided as a function of the total surface area of the structure. >