
Showing papers on "Cumulative distribution function published in 1980"


Journal ArticleDOI
TL;DR: In this paper, the authors develop a generalized double bounded probability density function (DB-PDF) for the simulation of random variables and show how its parameters can be estimated in practical problems.

799 citations
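
The summary does not spell out the density, but a double-bounded distribution with the closed-form CDF F(x) = 1 - (1 - x^a)^b on [0, 1] (the form now known as the Kumaraswamy distribution) matches the description; under that assumption, a minimal sketch of evaluation and inverse-transform simulation:

```python
import numpy as np

def db_pdf(x, a, b):
    """Density f(x) = a*b*x^(a-1)*(1-x^a)^(b-1) on 0 < x < 1."""
    return a * b * x**(a - 1) * (1 - x**a)**(b - 1)

def db_cdf(x, a, b):
    """Closed-form CDF F(x) = 1 - (1 - x^a)^b."""
    return 1 - (1 - x**a)**b

def db_sample(n, a, b, rng=None):
    """Inverse-transform simulation: x = (1 - (1 - u)^(1/b))^(1/a)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    return (1 - (1 - u)**(1 / b))**(1 / a)

x = db_sample(10_000, a=2.0, b=3.0)
print(x.mean(), db_cdf(0.5, 2.0, 3.0))
```

The closed-form inverse CDF is what makes such a family convenient for simulating bounded random variables.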


Journal ArticleDOI
TL;DR: Both search models (random and systematic) are extended, first to the case of multiple occurrences of a single fault type within a search field and second to the case of multiple fault types.
Abstract: Performance in a visual search task is usually measured by the cumulative probability of locating a target, F(t), in a given time (t). Two extreme F(t) against (t) relationships have been postulated, one assuming that search is random, and the other assuming that search is systematic. However, these relationships have only been available for the situation in which each search field contains a single occurrence of a single type of target. This paper extends both search models (random and systematic) first to the case of multiple occurrences of a single fault type within a search field and second to the case of multiple fault types. For systematic search, these two cases can be combined to predict the effects of multiple occurrences of multiple fault types. The general F(t) relationships are given in each case and illustrated with a worked example.

93 citations
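
A hedged illustration of the two limiting models, using the textbook forms (exponential F(t) for random search, linear F(t) for systematic search) and an independence assumption to extend each to m identical targets; the paper's exact derivations may differ:

```python
import numpy as np

def F_random(t, lam, m=1):
    """Random search: detection of each target is a Poisson process of rate
    lam, so the first of m independent targets is found by time t with
    probability 1 - exp(-m*lam*t)."""
    return 1 - np.exp(-m * lam * t)

def F_systematic(t, T, m=1):
    """Systematic search: a single target is found at a time uniform on
    [0, T]; with m independently placed targets, the first is found by t
    with probability 1 - (1 - t/T)**m."""
    t = np.clip(t, 0, T)
    return 1 - (1 - t / T)**m

t = np.linspace(0, 10, 6)
print(F_random(t, lam=0.3, m=2))
print(F_systematic(t, T=10.0, m=2))
```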


Journal Article
TL;DR: Mathematical descriptions of several common sample designs used to estimate the distribution parameters are developed, because failure to describe the sample space adequately can lead to erroneous genetic analyses.
Abstract: A necessary item of information in many genetic analyses of complex disorders with late onset is the cumulative probability of onset by a given age. The effect of sample design upon the estimation of age-of-onset probability distribution parameters is discussed. Mathematical descriptions of several common sample designs used to estimate the distribution parameters are developed here. Failure to describe the sample space adequately can lead to erroneous genetic analyses because the cumulative probability of onset is incorrectly estimated. In genetic counseling, the errors would usually result in an underestimate of the true risk.

63 citations
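
As a minimal sketch of why a correctly estimated onset CDF matters in counseling, the following hypothetical Bayes update computes the probability that a still-unaffected relative carries the disease allele; the prior and the value of F(t) are illustrative, not from the paper:

```python
def carrier_risk(prior, F_t):
    """Probability of carrying the allele given no onset by age t, where F_t
    is the carriers' cumulative probability of onset by t. An overestimated
    F_t understates the quoted risk, and vice versa, which is how a badly
    described sample space propagates into counseling errors."""
    unaffected_if_carrier = 1.0 - F_t
    return prior * unaffected_if_carrier / (
        prior * unaffected_if_carrier + (1.0 - prior))

# e.g. a 0.5 prior (sibling of an affected person) and no onset by an age at
# which 70% of carriers would have shown the disorder:
print(carrier_risk(0.5, 0.70))   # ~0.23
```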


Journal ArticleDOI
TL;DR: In this article, an expression for the first-order probability density function of the laser speckle phase is analytically derived under the assumption that the field obeys a non-circular, complex Gaussian, random process with a certain correlation between the real and imaginary parts of its complex amplitude.
Abstract: An expression for the first-order probability density function of the laser speckle phase is analytically derived under the assumption that the speckle field obeys a non-circular, complex Gaussian, random process with a certain correlation between the real and imaginary parts of its complex amplitude. The probability density function of the speckle phase is actually evaluated for various cases and shown three-dimensionally as a function of the standard deviation of random object phase variations. The effect of random object phase variations on the probability density function is also investigated in detail.

40 citations


Journal ArticleDOI
TL;DR: In this paper, a computer technique is described for contouring and precisely locating the modes of vector distributions that may be highly skewed. Unit vectors from a given data set are treated not as discrete points but as identical Fisherian probability density functions defined (at an angle θ from the unit vector) by p = exp[sk(cos θ - 1)], where k is the estimate of the Fisherian concentration parameter and s is an arbitrarily assigned smoothing parameter.
Abstract: Summary. Advantages of using the mode in analysis of palaeomagnetic vectors are discussed, and a computer technique is described for contouring and precisely locating the modes of vector distributions that may be highly skewed. In contrast to conventional determinations of the mode, unit vectors from a given data set are treated not as discrete points, but as identical Fisherian probability density functions defined (at an angle θ from the unit vector) by: p = exp[sk(cos θ - 1)], where k is the estimate of the Fisherian concentration parameter, and s is an arbitrarily assigned 'smoothing parameter'. A grid, representing the cumulative probability distribution of the total sample of vectors, is contoured to provide a graphical display of the distribution around the most probable value, the mode. By repeatedly contouring the same sample of vectors with successively larger values of s, and by treating the mode as a vector with length given by the total probability value at the mode, 'progressive modal diagrams' can be constructed to aid in determining the stable position of the mode of skewed distributions. In addition, a new statistic, α95, is suggested as an error estimator for the mode. The statistic α95 is derived from the largest subset of the total sample that has a mean identical with the mode of the total sample; this statistic is defined as the Fisherian half-angle of the cone of 95 per cent confidence for the mean of this subset.

28 citations
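
A minimal sketch of the kernel-evaluation step: each unit vector contributes an identical Fisherian density exp[sk(cos θ - 1)], the grid holds their sum, and the maximum locates the mode. The grid construction, sample data, and parameter values are illustrative; the paper adds contouring, progressively larger s, and the α95 error statistic:

```python
import numpy as np

def fisherian_density(grid, data, s, k):
    """Sum identical Fisherian kernels exp(s*k*(cos(theta) - 1)) centred on
    each data vector at every grid direction. grid (G,3) and data (N,3) are
    unit vectors, so cos(theta) is a dot product."""
    cos_theta = grid @ data.T                  # (G, N) cosines
    return np.exp(s * k * (cos_theta - 1)).sum(axis=1)

# crude longitude/latitude grid of unit directions
lon, lat = np.meshgrid(np.radians(np.arange(0, 360, 5)),
                       np.radians(np.arange(-88, 90, 4)))
grid = np.column_stack([np.cos(lat).ravel() * np.cos(lon).ravel(),
                        np.cos(lat).ravel() * np.sin(lon).ravel(),
                        np.sin(lat).ravel()])

rng = np.random.default_rng(0)
data = rng.normal([0.2, 0.1, 0.97], 0.1, size=(50, 3))   # synthetic vectors
data /= np.linalg.norm(data, axis=1, keepdims=True)

dens = fisherian_density(grid, data, s=1.0, k=30.0)
print(grid[np.argmax(dens)])   # most probable direction: the mode
```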


Journal ArticleDOI
TL;DR: In this paper, the complexity involved in statistically estimating the reliability of computer components from field data on systems having different operating times is discussed, and the renewal method for estimating component reliability and the corresponding assumptions are explained with an application to some actual field data.

17 citations
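
The paper's renewal method is not reproduced in this summary; as a hedged baseline, a common pooled estimate under an assumed exponential component life (so renewals form a Poisson process) combines failures across systems with unequal operating times:

```python
import math

def exp_rate_from_field_data(records):
    """Pooled failure-rate estimate under the exponential-life assumption:
    lambda_hat = total failures / total operating hours. Each record is
    (failures_observed, operating_hours); systems may differ widely in how
    long they have been fielded."""
    failures = sum(f for f, _ in records)
    hours = sum(h for _, h in records)
    return failures / hours

records = [(3, 8760.0), (0, 1200.0), (5, 15000.0)]   # illustrative field data
lam = exp_rate_from_field_data(records)
print(lam, "failures/hour; R(1000 h) =", math.exp(-lam * 1000.0))
```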


DOI
29 Jan 1980
TL;DR: In this paper, the authors present a method to statistically estimate the severest sea state (significant wave height) from observed data by expressing the cumulative distribution of the significant wave height asymptotically as a combination of an exponential and a power of the significant wave height.
Abstract: This paper presents a method to statistically estimate the severest sea state (significant wave height) from observed data. For the estimation of extreme significant wave height, a precise representation of the data by a certain probability function is highly desirable. Since no specific technique exists to meet this requirement, the reliability of the current method of predicting the severest sea condition is seriously affected. The author's method is to express the cumulative distribution of the significant wave height asymptotically as a combination of an exponential and a power of the significant wave height. The parameters involved are determined numerically by a nonlinear minimization procedure. The method is applied to available significant wave height data measured in the North Sea, the Canadian coast, and the U.S. coast. The results of the analysis show that the data are well represented by the proposed method over the entire range of the cumulative distribution.

10 citations
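
One plausible reading of "a combination of an exponential and a power of the significant wave height" is an exceedance probability of the form 1 - F(H) = A·H^m·exp(-kH); this functional form, the starting values, and the use of a generic nonlinear least-squares routine are assumptions standing in for the paper's nonlinear minimization procedure:

```python
import numpy as np
from scipy.optimize import least_squares

def exceedance(H, A, m, k):
    """Assumed asymptotic tail: 1 - F(H) = A * H**m * exp(-k*H)."""
    return A * H**m * np.exp(-k * H)

def fit_exceedance(H, F_emp):
    """Fit (A, m, k) to empirical CDF values by nonlinear least squares."""
    resid = lambda p: exceedance(H, *p) - (1 - F_emp)
    return least_squares(resid, x0=[1.0, 1.0, 0.5],
                         bounds=([0, -5, 0], [np.inf, 5, np.inf])).x

# illustrative sorted wave heights with plotting-position CDF values
H = np.array([1.2, 1.8, 2.5, 3.1, 4.0, 5.2, 6.8, 8.5])
F_emp = np.arange(1, len(H) + 1) / (len(H) + 1)
print(fit_exceedance(H, F_emp))
```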


Journal ArticleDOI
TL;DR: The results of this study indicate that simulation is more efficient than bounding distributions, and that the relative efficiency of simulation is further enhanced as networks become larger.
Abstract: The method of bounding distributions and Monte Carlo simulation are compared for efficiency in estimating the mean and cumulative distribution of network completion time. Efficiency is measured in terms of computation time and the precision of the estimates. The results of this study indicate that simulation is more efficient than bounding distributions, and that the relative efficiency of simulation is further enhanced as networks become larger.

8 citations
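
A minimal sketch of the Monte Carlo side of the comparison, on a hypothetical four-activity network (not from the paper): sample activity durations, take the longest source-to-sink path as the completion time, and estimate the mean and points on the empirical CDF:

```python
import numpy as np

def completion_times(n_runs, rng=None):
    """Simulate completion times of a two-path activity network:
    source -> A -> sink and source -> B -> sink; completion time is the
    longer of the two path sums."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.exponential(3.0, n_runs)   # source -> A
    c = rng.exponential(2.0, n_runs)   # A -> sink
    b = rng.exponential(5.0, n_runs)   # source -> B
    d = rng.exponential(1.0, n_runs)   # B -> sink
    return np.maximum(a + c, b + d)

t = completion_times(100_000)
print("mean:", t.mean())
print("P(T <= 10):", (t <= 10).mean())   # one point on the empirical CDF
```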



Journal ArticleDOI
TL;DR: In this article, a procedure for comparing two regression functions over a finite region of values of the independent variables is presented, which requires only a univariate t table, and a real application is discussed for the case with two independent variables.
Abstract: SUMMARY A procedure, which requires only a univariate t table, is presented for comparing two regression functions over a finite region of values of the independent variables. A real application is discussed for the case with two independent variables. Also, a simple nonparametric test is given for testing the equality of two populations when the observations (X1, . . ., Xk, Y) are multivariate and the alternative of interest is that the conditional cumulative distribution function of Y, given X1, . . ., Xk for one population, is greater than or equal to that for the other population for every value of X1, . . ., Xk.

6 citations


Journal ArticleDOI
TL;DR: In this paper, the probability density function and the cumulative distribution function of the distances from the probe's center at which the radiation was absorbed are given and compared with the results obtained by the Monte Carlo simulation technique.

Book ChapterDOI
01 Jan 1980
TL;DR: This chapter discusses the simulation of a multi-risk collective model using a computer program written primarily in the FORTRAN language that was executed on an IBM 360 model 50 at the Ball State University Computer Center.
Abstract: Publisher Summary This chapter discusses the simulation of a multi-risk collective model. The simulation was carried out using a computer program, written primarily in the FORTRAN language, that was executed on an IBM 360 model 50 at the Ball State University Computer Center. Random deviates of a given distribution were generated by the Monte Carlo procedure of drawing an ordinary random number and then using the cumulative distribution function of the given distribution to produce the desired random deviate. The computer program consists of a main routine that controls the logic; separate subroutines are written for the generation of the random variables of the distributions. The routines for the negative exponential and the Pareto random deviates use the explicit inverses of the cumulative distribution functions.
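
The negative exponential and Pareto cases mentioned above have explicit inverse CDFs, so the procedure reduces to one uniform draw per deviate; a minimal sketch (parameter names are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def neg_exponential(n, lam):
    """F(x) = 1 - exp(-lam*x)  =>  x = -ln(1 - u) / lam."""
    u = rng.uniform(size=n)
    return -np.log(1 - u) / lam

def pareto(n, xm, alpha):
    """F(x) = 1 - (xm/x)**alpha  =>  x = xm * (1 - u)**(-1/alpha)."""
    u = rng.uniform(size=n)
    return xm * (1 - u)**(-1 / alpha)

print(neg_exponential(5, lam=0.5))
print(pareto(5, xm=1.0, alpha=2.5))
```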

Patent
22 Feb 1980
TL;DR: In this paper, the authors present a calculation graph from which the probability L(P) that the number of defective samples is not more than (c) can be read, by obtaining the product (pn) of the defective rate (p) and the sample number (n).
Abstract: PURPOSE: To obtain a probability curve from which the probability L(P) that the number of defective samples is not more than (c) can be read, by obtaining the product (pn) of the defective rate (p) and the sample number (n) on the graph. CONSTITUTION: On the calculation graph, the sample number (n) and the defective rate (p) are assigned to scales A and C respectively, and scale B is so divided on cumulative probability paper that the product (pn) agrees with the point where a straight line between given values of (n) and (p) crosses scale B; this division is made to agree with that of (pn) for obtaining a cumulative probability curve. When (n) is 60 and (p) is 8X10^-3, the straight line between the two crosses scale B at a value of 0.48 for (pn), and when (c) is zero, L(P) is 0.6.
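
The worked point is consistent with a Poisson approximation to the operating-characteristic curve, under which L(P) depends on p and n only through the product pn: with pn = 0.48 and c = 0, exp(-0.48) ≈ 0.62, read off the graph as 0.6. A sketch under that assumption:

```python
from math import exp, factorial

def acceptance_probability(p, n, c):
    """Poisson approximation: probability of at most c defectives in a
    sample of n when the defective rate is p, a function of pn alone."""
    pn = p * n
    return sum(exp(-pn) * pn**k / factorial(k) for k in range(c + 1))

# the patent's worked point: n = 60, p = 8e-3, so pn = 0.48
print(acceptance_probability(8e-3, 60, c=0))   # ~0.619
```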

Proceedings ArticleDOI
G. DeMuth1
01 Apr 1980
TL;DR: A scaling approach is described that is based upon the cumulative distribution function of the quantized signal that is used in estimating the position of the standard deviation of the signal plus noise in the quantization aperture.
Abstract: In signal processing, multidimensional FFTs and circular convolution filters implemented via multiplication in the frequency domain are used frequently. To avoid amplitude modulation of the signal, a scaling based upon criteria involving multiple FFTs must be employed; a direct application of block floating point is inadequate. A scaling approach is described that is based upon the cumulative distribution function of the quantized signal. The characteristics of the cumulative distribution function are used in estimating the position of the standard deviation of the signal plus noise in the quantization aperture.
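
One way to realize the estimate described above, assuming a roughly Gaussian zero-mean quantized signal: the empirical CDF reaches Φ(1) ≈ 0.841 at the +1σ code value, which locates σ within the quantization aperture. The Gaussian assumption, bit width, and interface are ours, not the paper's:

```python
import numpy as np

PHI_ONE = 0.8413   # standard normal CDF at +1 sigma

def sigma_position(samples, n_bits=12):
    """Find the code value where the empirical CDF crosses Phi(1) and
    return it as a fraction of full scale, i.e. the position of one
    standard deviation inside the quantization aperture."""
    levels = np.sort(samples)
    sigma_level = levels[int(PHI_ONE * len(levels))]
    return sigma_level / 2**(n_bits - 1)

rng = np.random.default_rng(1)
x = np.clip(np.round(rng.normal(0.0, 200.0, 50_000)), -2048, 2047)
print(sigma_position(x))   # ~200/2048 ~ 0.098
```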

Journal ArticleDOI
C.T. Isley1
TL;DR: The cumulative probability for a circular error probability (CEP) radius can be developed as a power series which converges efficiently for cases of practical interest and is suitable for numerical solution with simple iteration methods as discussed by the authors.
Abstract: The cumulative probability for a circular error probability (CEP) radius can be developed as a power series which converges efficiently for cases of practical interest and is suitable for numerical solution with simple iteration methods. The procedure can be applied with equal facility to circles of arbitrary probability level.
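
A minimal sketch of the iteration idea in the degenerate circular-Gaussian case, where F(R) = 1 - exp(-R²/2σ²) has a closed form that checks the Newton steps; the paper's power series for the general problem is not reproduced here:

```python
import math

def radius_for_probability(p, sigma, tol=1e-12):
    """Newton iteration for the radius enclosing probability p of a circular
    bivariate normal: solve F(R) = p with F(R) = 1 - exp(-R^2/(2 sigma^2))
    and density f(R) = (R/sigma^2) * exp(-R^2/(2 sigma^2))."""
    R = sigma                       # starting guess
    for _ in range(50):
        e = math.exp(-R * R / (2.0 * sigma * sigma))
        step = ((1.0 - e) - p) / ((R / sigma**2) * e)
        R -= step
        if abs(step) < tol:
            break
    return R

print(radius_for_probability(0.5, sigma=1.0))   # CEP ~ 1.1774
print(math.sqrt(-2.0 * math.log(0.5)))          # closed-form check
```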

01 Jan 1980
TL;DR: Geuze as discussed by the authors presented a comparison of some simple estimation procedures for gross margin cumulative distribution functions.
Abstract: The 1970s brought major changes to the risk environment in which agricultural managers function. Increasing international trade, government regulation, market-oriented commodity programs, inflation and high capital costs are some of the factors contributing to increasing risk in agriculture. In order to cope with this changing environment, farmers will need to improve their ability to assess risk and return from competing decisions. Disciplines: Agribusiness | Growth and Development | Industrial Organization | International Business. This report is available at Iowa State University Digital Repository: http://lib.dr.iastate.edu/econ_las_staffpapers/80. A Comparison of Some Simple Estimation Procedures for Gross Margin Cumulative Distribution Functions, Rlen Geuze, Department of Agricultural Economics, Agricultural University of Wageningen, Netherlands.

Journal ArticleDOI
J.L. Wirth1
TL;DR: An ensemble of pseudo-random numbers is added to the density sample values of the image and the result is thresholded. If the ensemble is generated to have the appropriate cumulative probability distribution, the image is reproduced with a desired reproduction curve.
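
A minimal sketch of the screening idea, with an illustrative interface (not the paper's): a pixel turns white when density plus noise exceeds the threshold, which happens with probability 1 - F(threshold - density), so the noise CDF F directly shapes the reproduction curve:

```python
import numpy as np

def screened_image(density, threshold, noise_cdf_inv, rng=None):
    """Add noise with a chosen CDF (generated through its inverse) to the
    density samples and threshold the sum to a binary image."""
    rng = np.random.default_rng() if rng is None else rng
    noise = noise_cdf_inv(rng.uniform(size=density.shape))
    return (density + noise > threshold).astype(np.uint8)

# uniform noise on [0, 1) makes P(white) = density: a linear reproduction curve
density = np.linspace(0.0, 1.0, 8).reshape(1, -1)
print(screened_image(density, threshold=1.0, noise_cdf_inv=lambda u: u))
```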