
Showing papers on "Probability density function published in 1975"


Journal ArticleDOI
TL;DR: Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality problems, with the ultimate goal of providing further understanding of these problems in terms of density gradients.
Abstract: Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality problems, with the ultimate goal of providing further understanding of these problems in terms of density gradients.
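
The mean-shift idea described above lends itself to a compact illustration. The sketch below implements the flat-kernel variant (each point repeatedly moves to the average of the sample points within radius h, i.e., uphill along the estimated density gradient) on synthetic two-cluster data; the radius and the toy data are illustrative choices, not the paper's.

```python
import numpy as np

def mean_shift(points, h=1.0, n_iter=100, tol=1e-6):
    """Move each point to the mean of the sample points within radius h,
    following the estimated density gradient uphill to a mode."""
    modes = points.copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, x in enumerate(modes):
            neighbors = points[np.linalg.norm(points - x, axis=1) <= h]
            shifted[i] = neighbors.mean(axis=0)
        if np.max(np.linalg.norm(shifted - modes, axis=1)) < tol:
            return shifted
        modes = shifted
    return modes

# Two synthetic clusters; every point converges to its cluster's mode.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
print(np.unique(np.round(mean_shift(data, h=1.0), 1), axis=0))
```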

3,125 citations


Book ChapterDOI
01 Jan 1975
TL;DR: In this article, the first-order statistics of the complex amplitude, intensity and phase of speckle are derived for a free-space propagation geometry and for an imaging geometry.
Abstract: Since speckle plays an important role in many physical phenomena, it is essential to fully understand its statistical properties. Starting from the basic idea of a random walk in the complex plane, we derive the first-order statistics of the complex amplitude, intensity and phase of speckle. Sums of speckle patterns are also considered, the addition being either on an amplitude or on an intensity basis, with partially polarized speckle being a special case. Next we consider the sum of a speckle pattern and a coherent background, deriving the first-order probability density functions of intensity and phase. Attention is then turned to second-order statistics. The autocorrelation function and power spectral density are derived, both for a free-space propagation geometry and for an imaging geometry. In some cases the recorded speckle pattern may be spatially integrated or blurred, and accordingly consideration is given to the statistics of such patterns. Finally, the relationship between detailed surface structure and the resulting speckle pattern is explored, with emphasis on the effects of the surface autocorrelation function and the effects of finite surface roughness.
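
The random-walk model in the opening sentences is easy to reproduce numerically. The sketch below sums unit-amplitude phasors with uniformly distributed phases and checks the textbook consequence for fully developed polarized speckle: the intensity is negative-exponentially distributed, so its standard deviation equals its mean (unit contrast). The sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 100, 50_000          # phasors per random walk, independent samples

# Random walk in the complex plane: N unit phasors with uniform random phases.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N))
amplitude = np.exp(1j * phases).sum(axis=1) / np.sqrt(N)
intensity = np.abs(amplitude) ** 2

# Fully developed polarized speckle: p(I) = exp(-I/I_mean)/I_mean,
# so the contrast std(I)/mean(I) should be close to 1.
print("mean:", intensity.mean(), " std:", intensity.std())
```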

1,217 citations


Journal ArticleDOI
TL;DR: In this paper, the effects of stochastic parameter distributions on predicted hydraulic heads are analyzed with the aid of a set of Monte Carlo solutions to the pertinent boundary value problems, and the results show that the standard deviations of the input hydrogeologic parameters, particularly σy and σc, are important index properties; changes in their values lead to different responses for ϕ̄ even when the means μy, μc, and μn are fixed.
Abstract: The most realistic representation of a naturally occurring porous medium is a stochastic set of macroscopic elements in which the values of the three basic hydrogeologic parameters (hydraulic conductivity K, compressibility α, and porosity n) are defined by frequency distributions. A homogeneous formation under this representation is one in which the frequency distributions do not change through space. All soils and geologic formations, even the ones that are homogeneous, show random variations in the values of the hydrogeological parameters through space; that is, they are nonuniform, and a measure of the nonuniformity is provided by the standard deviation of the frequency distributions. If K and α are log normally distributed and n is normally distributed, and if we define Y = log K and C = log α, then the parameters Y, C, and n can be generated from a multivariate normal density function with means μy, μc, and μn, standard deviations σy, σc, and σn, and correlation coefficients ρyc, ρyn, and ρcn. The analysis of groundwater flow in nonuniform media requires a stochastic-conceptual approach in which the effects of stochastic parameter distributions on predicted hydraulic heads are analyzed with the aid of a set of Monte Carlo solutions to the pertinent boundary value problems. In this study, two one-dimensional saturated flow problems are analyzed: steady state flow between two specified heads and transient consolidation of a clay layer. The primary output is the statistical distribution of hydraulic head ϕ, through space and time, as indicated by the mean values ϕ̄(x, t) and their standard deviations Sϕ̄(x, t). Results show that the standard deviations of the input hydrogeologic parameters, particularly σy and σc, are important index properties; changes in their values lead to different responses for ϕ̄ even when the means μy, μc, and μn are fixed. The degree of uncertainty associated with hydraulic head predictions increases as the degree of nonuniformity of the porous medium increases. For large values of σy and σc it becomes virtually impossible to obtain meaningful hydraulic head predictions. For transient flow the output distribution of hydraulic head values is almost never normal; in some cases it approaches a uniform distribution. The results of this study throw into question the validity of the hidden assumption that underlies all deterministic groundwater modeling; namely, that it is possible to select a single value for each flow parameter in a homogeneous but nonuniform medium that is somehow representative and hence define an ‘equivalent’ uniform porous medium. For transient flow there may be no way to define an equivalent medium. The fact that nine index parameters rather than three are required to describe a nonuniform geologic formation, the large uncertainties in predicted hydraulic heads for relatively simple flow problems in nonuniform soils, and the contention that there may be no simple way to define an equivalent uniform porous medium all have important implications in the development of groundwater flow theory and in its most fundamental applications.
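
A stripped-down version of the Monte Carlo experiment is easy to set up for the steady-state case. The sketch below solves one-dimensional steady flow between two specified heads through blocks whose conductivity is lognormally distributed (Y = log K normal) and reports the mean and standard deviation of the head at the domain midpoint across realizations. The grid size, hyperparameters, and run count are illustrative; compressibility and porosity play no role in this steady-state sub-problem. Increasing sigma_y widens the output head distribution, which is the qualitative effect the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(2)
n_blocks, n_runs = 50, 2000
h_left, h_right = 10.0, 0.0              # specified boundary heads
mu_y, sigma_y = -4.0, 0.5                # Y = log10(K), illustrative values

mid_head = np.empty(n_runs)
for r in range(n_runs):
    K = 10.0 ** rng.normal(mu_y, sigma_y, n_blocks)  # one realization of the medium
    R = 1.0 / K                                      # resistance of each unit-length block
    q = (h_left - h_right) / R.sum()                 # steady Darcy flux (unit area)
    heads = h_left - q * np.cumsum(R)                # head at each block outlet
    mid_head[r] = heads[n_blocks // 2]

print("mean head at midpoint:", mid_head.mean())
print("standard deviation (prediction uncertainty):", mid_head.std())
```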

990 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the probability density function (PDF) of the fluctuations in void fraction may be used as an objective and quantitative flow pattern discriminator for the three dominant patterns of bubbly, slug, and annular flow.

349 citations


Journal ArticleDOI
TL;DR: In this article, estimates of multidimensional density functions based on a bounded and bandlimited weight function are considered and the asymptotic behavior of quadratic functions of density function estimates useful in setting up a test of goodness of fit of the density function is determined.
Abstract: This paper considers estimates of multidimensional density functions based on a bounded and bandlimited weight function. The asymptotic behavior of quadratic functions of density function estimates useful in setting up a test of goodness of fit of the density function is determined. A test of independence is also given. The methods use a Poissonization of sample size. The estimates considered are appropriate if one is interested in estimating density functions or in determining local deviations from a given density function.

253 citations


Journal ArticleDOI
Abstract: The equation describing the evolution of the probability density function of the temperature field in a turbulent axisymmetric heated jet is presented. A closure problem is present and some possible ways of attacking it are suggested. A closure at the first‐order level is then tentatively tried and similarity arguments are exploited. A hyperbolic first‐order variable‐coefficient quasi‐self‐preserving partial differential equation is obtained. The constants appearing in those coefficients are evaluated from available experiments. Some questions are raised on the uncertainty of the computed constants due to the experimental scattering of the velocity‐temperature correlation at the centerline. The probability density function is obtained analytically as a function of downstream location along the centerline if it is prescribed at a reference centerline position. In particular, the probability density function is taken as Gaussian at ten diameters downstream. The computed mean and variance are compared with existing experiments and display a reasonably good agreement. Values for the skewness and flatness factor tend to indicate that deviations from Gaussianity along the centerline are very small.

196 citations


Journal ArticleDOI
TL;DR: In this paper, a unified microscopic statistical theory of preequilibrium and equilibrium processes of the compound nucleus, valid for mass numbers A ⪆ 40, light incident projectiles (A′ ≲ 4), and for excitation energies a few MeV above neutron threshold or larger, is presented.

193 citations


Journal ArticleDOI
TL;DR: In this article, a mathematical and statistical study which shows the flexibility and limitations of the log Pearson type 3 distribution is carried out, and the results indicate the various forms of density function and relationships that exist between distribution parameters and moments, coefficient of variation, and coefficient of skewness.
Abstract: A mathematical and statistical study which shows the flexibility and limitations of the log Pearson type 3 distribution is carried out. The results obtained indicate the various forms of density function and relationships that exist between distribution parameters and moments, coefficient of variation, and coefficient of skewness. The method of fitting proposed by the Hydrology Committee of the Water Resources Council is compared with a new method which, instead of using moments of the logarithmic values, retains the moments of the original data. In the case of events with a large return period the results obtained by the two methods may deviate appreciably, and because the proposed method gives the same weight to each of the observed values, it results in a better fit of the data.
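
For reference, the Water Resources Council method mentioned here fits the log Pearson type 3 distribution by applying the method of moments to the logarithms of the data. A minimal sketch of that baseline follows, using scipy's Pearson type 3 distribution and hypothetical flood-peak data; the alternative method the paper proposes would instead match the moments of the original (untransformed) data.

```python
import numpy as np
from scipy import stats

# Hypothetical annual flood peaks (m^3/s).
flows = np.array([120., 95., 210., 160., 300., 140., 180., 90., 260., 170.,
                  130., 220., 110., 190., 240.])

# WRC-style fit: method of moments applied to the logarithms of the data.
y = np.log10(flows)
m, s, g = y.mean(), y.std(ddof=1), stats.skew(y, bias=False)

for T in (10, 100):                      # return period in years
    q_log = stats.pearson3.ppf(1.0 - 1.0 / T, g, loc=m, scale=s)
    print(f"T = {T:3d} yr quantile: {10 ** q_log:7.1f} m^3/s")
```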

144 citations


Journal ArticleDOI
01 Sep 1975
TL;DR: This correspondence presents a procedure for generating correlated random variables with specified non-Gaussian probability distribution functions (pdf's) such as might be required for Monte Carlo simulation studies.
Abstract: This correspondence presents a procedure for generating correlated random variables with specified non-Gaussian probability distribution functions (pdf's) such as might be required for Monte Carlo simulation studies. Specifically, a method is presented for generating an arbitrary number of pseudorandom numbers each with a prescribed probability distribution and with a prescribed correlation coefficient matrix for the collection of random numbers. Collections of typical numbers generated with the method are evaluated with chi-squared tests for the distribution functions and with confidence intervals for the correlation coefficients derived from maximum likelihood estimates. In all cases tested the generated numbers passed the tests.
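
One standard way to implement such a procedure (the correspondence's exact construction may differ) is the Gaussian-copula route: generate correlated standard normals, map them to uniforms through the normal CDF, and then through the inverse CDFs of the target marginals. A sketch under those assumptions; note that the output correlation is close to, but not exactly, the correlation imposed on the underlying Gaussians, so a careful implementation would add a correction step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100_000

# Correlation imposed on the underlying Gaussian pair.
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])
z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(R).T   # correlated normals
u = stats.norm.cdf(z)                                       # correlated uniforms
x0 = stats.expon.ppf(u[:, 0], scale=2.0)                    # exponential marginal
x1 = stats.gamma.ppf(u[:, 1], a=3.0)                        # gamma marginal

print("sample correlation of the outputs:", np.corrcoef(x0, x1)[0, 1])
```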

139 citations


Book ChapterDOI
01 Jan 1975
TL;DR: The form is one of a linear combination of beta densities with random coefficients, and the order of convergence of the Bernstein polynomial estimate is comparable to that of other methods.
Abstract: This chapter presents a Bernstein polynomial approach to density function estimation and describes the Bernstein polynomial estimate of f(x). The form is one of a linear combination of beta densities with random coefficients. The order of convergence of the Bernstein polynomial estimate is comparable to that of other methods. The estimate has a significant practical advantage in that it is well adapted to computation.
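
The construction can be sketched directly: on [0, 1] the estimate is a mixture of Beta(k + 1, m − k) densities whose random coefficients are increments of the empirical distribution function. A minimal version, assuming the sample lives on [0, 1]:

```python
import numpy as np
from scipy import stats

def bernstein_density(x, sample, m=20):
    """Bernstein estimate on [0, 1]: a linear combination of beta densities
    with coefficients F_n((k+1)/m) - F_n(k/m) from the empirical CDF F_n."""
    sample = np.sort(sample)
    grid = np.arange(m + 1) / m
    Fn = np.searchsorted(sample, grid, side="right") / len(sample)
    weights = np.diff(Fn)                # random coefficients, summing to 1
    return sum(w * stats.beta.pdf(x, k + 1, m - k)
               for k, w in enumerate(weights))

rng = np.random.default_rng(4)
data = rng.beta(2, 5, size=500)          # true density: Beta(2, 5)
for x in (0.1, 0.3, 0.6):
    print(x, round(bernstein_density(x, data, m=30), 3),
          round(stats.beta.pdf(x, 2, 5), 3))
```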

116 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric estimator of a density at a particular quantile is based on sample quantiles; the optimal choice of these quantiles is considered, and a method of removing the bias is suggested.
Abstract: A non-parametric estimator of a density at a particular quantile is based on sample quantiles. The optimal (to minimize M.S.E.) choice of these quantiles is considered and a method of removing the bias is suggested.
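
The basic estimator is one line: the density at the p-th quantile is approximated by the ratio of a probability increment to the corresponding sample-quantile increment. The sketch below uses a fixed symmetric spacing; the paper's contribution is the optimal (minimum-MSE) choice of the quantiles and a bias correction, neither of which is attempted here.

```python
import numpy as np

def density_at_quantile(sample, p, delta=0.05):
    """f(x_p) ~= (p2 - p1) / (x_p2 - x_p1), with sample quantiles bracketing p."""
    x1, x2 = np.quantile(sample, [p - delta, p + delta])
    return 2 * delta / (x2 - x1)

rng = np.random.default_rng(5)
data = rng.standard_normal(10_000)
# True density of N(0, 1) at its median: 1/sqrt(2*pi) ~= 0.399.
print(density_at_quantile(data, 0.5))
```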

Journal ArticleDOI
TL;DR: The rate at which the mean square error decreases as sample size increases is evaluated for general $L^1$ kernel estimates and for the Fourier integral estimate for a probability density function as discussed by the authors.
Abstract: The rate at which the mean square error decreases as sample size increases is evaluated for general $L^1$ kernel estimates and for the Fourier integral estimate for a probability density function. The estimates are then compared on the basis of these rates.

Journal ArticleDOI
TL;DR: In this article, the authors present an analytical study of two-stage production systems with variable operation times and provision for intermediate storage. And they show that for moderate coefficients of variation the production rates for these two distributions differ only marginally.
Abstract: This paper presents an analytical study of several aspects of two-stage production systems with variable operation times and provision for intermediate storage. For the exponential service times assumed at one of the stages, the set of simultaneous equations satisfied by the steady-state probabilities is shown to involve the Laplace transform of the density function at the other stage and its various order derivatives. An analysis of this set of equations leads to a recursive solution for the mean production rate of a system with any number of storages. The realistic cases of Erlang and normal density functions are worked out in detail. It turns out that for moderate coefficients of variation the production rates for these two distributions differ only marginally. That this is not generally true is illustrated by considering a uniform distribution, for which the results are significantly different. The problem of balancing the production system is discussed at some length. It is shown that the p...
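
A simulation makes this kind of comparison easy to reproduce for a single intermediate buffer. The sketch below drives a saturated two-stage line with a standard departure-time recursion (blocking after service); the service-time parameters and buffer sizes are illustrative, and the paper itself obtains the production rate analytically rather than by simulation.

```python
import numpy as np

def throughput(s1, s2, buffer_cap):
    """Saturated two-stage line with an intermediate buffer, blocking after
    service: stage 1 holds a finished job until space opens downstream."""
    n = len(s1)
    d1 = np.zeros(n)   # departure time of job i from stage 1
    d2 = np.zeros(n)   # departure time of job i from stage 2
    for i in range(n):
        c1 = (d1[i - 1] if i else 0.0) + s1[i]           # stage-1 service done
        freed = d2[i - buffer_cap - 1] if i > buffer_cap else 0.0
        d1[i] = max(c1, freed)                           # may wait for buffer space
        d2[i] = max(d1[i], d2[i - 1] if i else 0.0) + s2[i]
    return n / d2[-1]

rng = np.random.default_rng(6)
n = 100_000
exp_times = rng.exponential(1.0, n)                      # stage 1: exponential, mean 1
erlang_times = rng.gamma(4.0, 0.25, n)                   # stage 2: Erlang-4, mean 1
for b in (0, 1, 3, 10):
    print("buffer", b, "-> production rate", round(throughput(exp_times, erlang_times, b), 3))
```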

Journal ArticleDOI
01 Feb 1975
TL;DR: The probability distribution of the optimum of an integer linear program is discussed in which the elements of the right-hand-side (RHS) are distributed independently and the assumptions of the asymptotic algorithm of Gomory are supposed to hold for each realization of the RHS.
Abstract: The probability distribution of the optimum (Z) of an integer linear program is discussed in which the elements of the right-hand-side (RHS) are distributed independently. The assumptions of the asymptotic algorithm of Gomory are supposed to hold for each realization of the RHS. This algorithm also serves as the theoretical framework of the present communication. Bounds and approximations for the probability function of Z are derived demanding different levels of numerical effort. The normal distribution is a satisfactory approximation which is asymptotically correct if the elements of the RHS are uniformly distributed within bounds satisfying some requirements and the number of inequalities approaches infinity.

Journal ArticleDOI
TL;DR: In this article, a probabilistic model of linkage mechanisms considering tolerances on the link lengths and clearances in the hinges is made, and the synthesis procedure considering both the linear and second order terms is shown.

Journal ArticleDOI
TL;DR: A new method is presented which describes the behavior of an (N + 1)th-order tracking system in which the nonlinearity is either periodic [phase-locked loop (PLL) type] or nonperiodic [delay-locked loop (DLL) type].
Abstract: A new method is presented which describes the behavior of an (N + 1)th-order tracking system in which the nonlinearity is either periodic [phase-locked loop (PLL) type] or nonperiodic [delay-locked loop (DLL) type]. The cycle slipping of such systems is modeled by means of renewal Markov processes. A fundamental relation between the probability density function (pdf) of the single process and the renewal process is derived which holds in the transient as well as in the stationary state. Based on this relation it is shown that the stationary pdf, the mean time between two cycle slips, and the average number of cycles to the right (left) can be obtained by solving a single Fokker-Planck equation of the renewal process. The method is applied to the special case of a PLL and compared with the so-called periodic-extension (PE) approach. It is shown that the pdf obtained via the renewal-process approach can be reduced to agree with the PE solution for the first-order loop in the steady state only. The reasoning and its implications are discussed. In fact, it is shown that the approach based upon renewal-process theory yields more information about the system's behavior than does the PE solution.
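
The renewal picture of cycle slipping can be illustrated with a direct simulation of the first-order PLL: the phase error diffuses in the potential of the sinusoidal nonlinearity, and each time it gains or loses a full cycle the process is restarted, producing a sequence of renewal intervals. The loop gain, noise level, and step size below are arbitrary; the paper obtains the corresponding statistics by solving a Fokker-Planck equation rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(7)
K, sigma = 1.0, 1.2                 # loop gain and noise intensity (illustrative)
dt, n_steps = 1e-3, 2_000_000

phi, t_last, intervals = 0.0, 0.0, []
noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
for k in range(n_steps):
    phi += -K * np.sin(phi) * dt + noise[k]     # first-order loop dynamics
    if abs(phi) >= 2.0 * np.pi:                 # a cycle slipped right or left
        t = (k + 1) * dt
        intervals.append(t - t_last)
        t_last = t
        phi -= np.sign(phi) * 2.0 * np.pi       # renewal: restart the single process

print("slips:", len(intervals),
      " mean time between slips:", np.mean(intervals) if intervals else "n/a")
```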

Journal ArticleDOI
TL;DR: In this paper, high-order moments of turbulent velocity gradients and their behavior with Reynolds number were measured in the nearly isotropic turbulent field generated by a square-mesh grid and in a turbulent boundary layer along a flat plate with zero pressure gradient.
Abstract: Higher-order moments of turbulent velocity gradients and their behavior with Reynolds number were measured in the nearly isotropic turbulent field generated by a square-mesh grid and in a turbulent boundary layer along a flat plate with zero pressure gradient. Hot-wire anemometry and instrumentation combining analog and digital methods were used to measure moments up to the fourteenth order. Measurements of such high-order moments required that particular attention be given to their validity. Involved herein was the evaluation of such effects as nonlinearity, averaging intervals, and the adequacy of the statistics for the tails of the probability density distributions. The results obtained are compared with those of other investigators for a variety of flow configurations in the laboratory as well as in the atmosphere. The concept of the intermittency of the small-scale structure and the theoretical approach involving lognormality of the probability density distribution of the dissipation rate are evaluated.

Journal ArticleDOI
TL;DR: In this paper, the common moments are expressed in terms of successive integrals of a probability density function, allowing a systematic mathematical comparison of the stochastic dominance and moment methods.
Abstract: Since the appearance in 1969 of Hadar and Russell's paper [1] and in 1970 of Whitmore's paper [4] extending stochastic dominance to the second and third degrees, a considerable interest has developed in stochastic dominance methods as an alternative to moment methods in investment ranking models. The particular attraction of stochastic dominance is that its results are consistent with the expected utility hypothesis without depending on a particular mathematical form of utility function or on a specific type of distribution of investment returns. Although both stochastic dominance ranking models and moment ranking models are based on probability distributions of investment returns, it has been difficult to relate the two types of models mathematically for a complete comparison of results. In this paper the common moments are expressed in terms of successive integrals of a probability density function to allow a systematic comparison of the two methods.
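
The successive-integral representation is straightforward to compute on a grid, and it is exactly what stochastic dominance tests use: F dominates G in the second degree when the once-integrated CDF of F lies below that of G everywhere. A sketch with two hypothetical normal return distributions of equal mean:

```python
import numpy as np
from scipy import stats

x = np.linspace(-10.0, 12.0, 4401)
dx = x[1] - x[0]
f = stats.norm.pdf(x, loc=1.0, scale=1.0)   # investment F
g = stats.norm.pdf(x, loc=1.0, scale=2.0)   # investment G: same mean, riskier

# Successive integrals of the densities: F1 is the CDF, F2 its integral.
F1, G1 = np.cumsum(f) * dx, np.cumsum(g) * dx
F2, G2 = np.cumsum(F1) * dx, np.cumsum(G1) * dx

print("first-degree dominance of F over G:", bool(np.all(F1 <= G1 + 1e-9)))
print("second-degree dominance of F over G:", bool(np.all(F2 <= G2 + 1e-9)))
```

With equal means but unequal risk, first-degree dominance fails while second-degree dominance holds, which is the kind of distinction the moment methods cannot express directly.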

Book ChapterDOI
01 Jan 1975
TL;DR: In this paper, the authors use the concept of probability density functions (PDFs) for the velocity and the scalar properties of a turbulent flow field; the variation of each property with time at a point is represented by a PDF.
Abstract: There is current interest in being able to account for the presence of fluctuations when calculating velocity, temperature, and composition distributions in turbulent flow fields with the effect of energetic chemical reactions included. Examples of this type of flow occur in combustors and afterburning rocket plumes. There are two major approaches to solving this problem. The first is evaluation of the higher moments of the equations of motion, the energy equation, and the species continuity equations.1 This approach requires information which is currently not available experimentally. Methods using the concept of probability density functions (PDF's) for the velocity and the scalar properties of the flow field have been described in Refs. 2, 3, and 4. In a turbulent flow each property varies with time at any given point in the flow field, and this variation can be represented using a PDF. The integral of a PDF between two values of a parameter in the field represents the fraction of time the parameter has a value between these two limits. Some attempts have been made to calculate PDF's a priori;2,3 however, these methods require some rather drastic assumptions and are limited to rather simple cases. An alternate approach is to use PDF's which are derived from available experimental data.4

Journal ArticleDOI
TL;DR: In this paper, a method was developed for the exact calculation of the probability density function of the sum of N correlated speckle patterns, in which it is only necessary first to find the eigenvalues of an N × N coherence matrix.
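
Under the usual assumptions (polarized speckle, positive and distinct eigenvalues), the resulting density takes the form quoted in standard treatments of this problem: a weighted sum of exponentials in the eigenvalues of the coherence matrix. A sketch of that evaluation with a hypothetical 3 × 3 coherence matrix:

```python
import numpy as np

def sum_speckle_pdf(I, coherence):
    """Density of the sum of N correlated speckle intensities, assuming the
    eigenvalues of the coherence matrix are positive and distinct."""
    lam = np.linalg.eigvalsh(coherence)
    N = len(lam)
    p = np.zeros_like(I, dtype=float)
    for k in range(N):
        others = np.delete(lam, k)
        p += lam[k] ** (N - 2) * np.exp(-I / lam[k]) / np.prod(lam[k] - others)
    return p

# Hypothetical coherence matrix for N = 3 partially correlated patterns.
J = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.4],
              [0.1, 0.4, 1.0]])
I = np.linspace(0.0, 20.0, 2001)
pdf = sum_speckle_pdf(I, J)
print("normalization (should be ~1):", pdf.sum() * (I[1] - I[0]))
```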

Journal ArticleDOI
R. Bartnikas1
TL;DR: In this article, a multichannel pulse-height analyzer was used to carry out statistical analyses of the corona pulse density patterns of EPR and XLPE insulated distribution cables, and the practical implications of the test results were discussed in the light of present corona level specifications, cable insulation geometry and type as well as insulation system life.
Abstract: A multichannel pulse-height analyzer has been used to carry out statistical analyses of the corona pulse density patterns of EPR and XLPE insulated distribution cables. The corona pulse probability density functions obtained on these cables were studied as a function of applied voltage and time. The practical implications of the test results are discussed in the light of present corona level specifications, cable insulation geometry and type as well as insulation system life.

Journal ArticleDOI
TL;DR: The Stochastic Dominance (SD) approach as discussed by the authors does not depend on specific assumptions about the investor's utility function and has been shown to be theoretically superior to the two-moment methods.
Abstract: Preference orderings of uncertain prospects have progressed from the two-moment EV model first developed by Markowitz [1952] to the more general efficiency analysis that is based on the entire probability function. This general efficiency approach, referred to as the Stochastic Dominance (SD) approach, does not depend on specific assumptions about the investor's utility function and has been shown to be theoretically superior to the “moment methods” [1].

Journal ArticleDOI
TL;DR: A comparison is presented between different methods of numerically evaluating the integral leading from the characteristic function to the corresponding probability distribution function.
Abstract: This paper contains a comparison between different methods to evaluate numerically the integral leading from the characteristic function to the corresponding probability distribution function.
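
One of the standard inversion formulas such a comparison would cover is the Gil-Pelaez integral. A minimal sketch, checked against the standard normal, whose characteristic function is exp(−t²/2); the truncation point of the integral is an illustrative choice.

```python
import numpy as np
from scipy import integrate

def cdf_from_cf(x, cf, upper=50.0):
    """Gil-Pelaez inversion:
    F(x) = 1/2 - (1/pi) * integral_0^inf Im[exp(-i t x) cf(t)] / t dt."""
    integrand = lambda t: np.imag(np.exp(-1j * t * x) * cf(t)) / t
    value, _ = integrate.quad(integrand, 1e-12, upper, limit=400)
    return 0.5 - value / np.pi

cf_normal = lambda t: np.exp(-0.5 * t ** 2)   # characteristic function of N(0, 1)
for x in (-1.0, 0.0, 1.0, 1.96):
    print(x, round(cdf_from_cf(x, cf_normal), 5))
```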

Journal ArticleDOI
TL;DR: In this paper, the authors established conditions under which such a histogram density estimator is uniformly consistent almost surely and, when the density has a unique mode, obtained a strongly consistent estimator of the mode similar to that of Venter (1967).
Abstract: Let X(1) ≤ X(2) ≤ … ≤ X(n) be an ordered sample of n independent observations X1, X2, …, Xn of a random variable X with distribution function F(x) and density f(x) continuous on its support set. As a nonparametric histogram estimator of the density function f(x), consider an estimator fn(x) defined in terms of the order statistics, where {An(x)} is a suitably chosen sequence of non-negative integer-valued indexing random variables, and {kn} is an appropriately defined sequence of positive integers which depends only on the sample size n. J. Van Ryzin (1973) has given conditions under which the above estimators are pointwise consistent. In this paper we establish conditions under which such a histogram density estimator is uniformly consistent almost surely. When the density has a unique mode, the results are used to obtain a strongly consistent estimator of the mode similar to that of Venter (1967).

Journal ArticleDOI
TL;DR: The problem considered is that of finding a 1 × n transformation matrix B that minimizes the probability of misclassification with respect to the one-dimensional transformed density functions; theoretical results are presented which give rise to a numerically tractable expression.
Abstract: The use of techniques for feature selection permits treatment of classification problems in spaces of reduced dimensions. A method is considered of linear feature selection for n-dimensional observation vectors which belong to one of two populations, where each population is described by a known multivariate normal density function. More specifically, the problem considered is that of finding a 1 × n transformation matrix B for which the probability of misclassification with respect to the one-dimensional transformed density functions is minimized. Theoretical results are presented which give rise to a numerically tractable expression for the variation in the probability of misclassification with respect to B. Using this expression a computational procedure is discussed for obtaining a B which minimizes the probability of misclassification. Preliminary numerical results are discussed.
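
The optimization can be reproduced numerically without the paper's analytic expression: project both normal populations onto a direction b and minimize the Bayes error of the two induced one-dimensional normals over b. The class parameters below are hypothetical, and a grid integration stands in for a closed-form error probability.

```python
import numpy as np
from scipy import stats, optimize

# Two known bivariate normal classes (hypothetical parameters).
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
S0 = np.array([[1.0, 0.3], [0.3, 1.0]])
S1 = np.array([[2.0, -0.2], [-0.2, 0.5]])

y = np.linspace(-15.0, 15.0, 6001)
dy = y[1] - y[0]

def error_prob(b):
    """Equal-prior misclassification probability of the two one-dimensional
    densities induced by the projection y = b @ x (invariant to scaling b)."""
    b = b / np.linalg.norm(b)
    p0 = stats.norm.pdf(y, mu0 @ b, np.sqrt(b @ S0 @ b))
    p1 = stats.norm.pdf(y, mu1 @ b, np.sqrt(b @ S1 @ b))
    return 0.5 * np.minimum(p0, p1).sum() * dy

res = optimize.minimize(error_prob, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
print("best direction:", res.x / np.linalg.norm(res.x), " P(error):", res.fun)
```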

Journal ArticleDOI
G. Parry1
TL;DR: In this paper, the theoretical form of the probability density function of the intensity in a polychromatic speckle pattern was considered and measurements using a multimode argon laser were compared with theory.
Abstract: We consider the theoretical form of the probability density function of the intensity in a polychromatic speckle pattern. Measurements using a multimode argon laser are compared with theory.


Journal ArticleDOI
TL;DR: In this article, the mathematical properties of the Rayleigh distribution, including but not limited to the density function, moment generating function, the maximum likelihood estimator, confidence intervals, and the bivariate Rayleigh distributions, are discussed.
Abstract: The first part of the paper deals with the mathematical properties of the Rayleigh distribution, including but not limited to the density function, the moment generating function, the maximum likelihood estimator, confidence intervals, and the bivariate Rayleigh distribution. The second part deals with applications of the Rayleigh distribution to the analysis of the response of marine vehicles to wave excitation. These applications are illustrated with regard to ocean waves, short- and long-term bending moment responses of a ship, and the "target problem" applied to the relationship between wind speed, wave height, and stress.
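
For reference, the density and maximum likelihood estimator discussed in the first part have simple closed forms: f(x) = (x/σ²) exp(−x²/2σ²) and σ̂² = Σxᵢ²/(2n). A quick numerical check on simulated amplitudes (the sample and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
sigma = 2.0
x = rng.rayleigh(scale=sigma, size=10_000)     # e.g. simulated wave amplitudes

# Maximum likelihood estimator: sigma_hat^2 = sum(x_i^2) / (2 n).
sigma_hat = np.sqrt(np.sum(x ** 2) / (2 * len(x)))
print("true sigma:", sigma, " MLE:", round(sigma_hat, 3))

# Rayleigh density f(x) = (x / sigma^2) exp(-x^2 / (2 sigma^2)) at x = 2.5.
pt = 2.5
print("f(2.5):", pt / sigma_hat ** 2 * np.exp(-pt ** 2 / (2 * sigma_hat ** 2)))
```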

Journal ArticleDOI
01 Jan 1975
TL;DR: In this article, the results of the measurements made by an ionization probe of the probability of reaction at many locations inside a cylindrical furnace using town gas were reported, and predictions were also obtained for the experimental conditions by numerical solution of a set of six simultaneous, elliptic, partial differential equations.
Abstract: Turbulent diffusion flames in a small, axisymmetrical cylindrical furnace are studied, using town gas. Results are reported of measurements, made by an ionization probe, of the probability of reaction at many locations inside the furnace. Predictions are also obtained for the experimental conditions by numerical solution of a set of six simultaneous, elliptic, partial differential equations. The time-mean hydrodynamic characteristics are described by the four variables: stream function, vorticity, turbulence kinetic energy, and the rate of dissipation of turbulence energy; the reaction is described by the time-mean mixture fraction, f, and the mean-squared fluctuations of the mixture fraction, g. Instantaneous values of the mass fractions and the temperature are evaluated from the assumption that the variation with time of the instantaneous values of f is described by a random wave form which has a suitably clipped Gaussian distribution of the probability density. Effects on the time-mean density of the concentration and temperature fluctuations are accounted for. A method is suggested for calculating the probability of chemical reaction at points in the flow field. Computed results agree fairly well with measurements.
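
The clipped-Gaussian step can be made concrete: the mixture-fraction pdf is a Gaussian restricted to (0, 1) plus delta masses at 0 and 1 carrying the clipped tails, with the underlying mean and width chosen so that the clipped distribution reproduces the prescribed f and g. A sketch of that moment-matching, using grid integration and a generic root finder; the paper's own procedure may differ in detail.

```python
import numpy as np
from scipy import stats, optimize

xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]

def clipped_moments(mu, s):
    """First two moments of a Gaussian clipped to [0, 1]: the interior part
    plus the delta mass at 1 (the delta at 0 contributes nothing)."""
    s = abs(s)
    interior = stats.norm.pdf(xs, mu, s)
    p1 = stats.norm.sf(1.0, mu, s)               # tail mass lumped at f = 1
    m1 = np.sum(xs * interior) * dx + p1
    m2 = np.sum(xs ** 2 * interior) * dx + p1
    return m1, m2

def fit_clipped_gaussian(fbar, g):
    """Choose (mu, s) so the clipped pdf has mean fbar and variance g."""
    def residual(params):
        m1, m2 = clipped_moments(*params)
        return [m1 - fbar, (m2 - m1 ** 2) - g]
    return optimize.fsolve(residual, x0=[fbar, np.sqrt(g)])

mu, s = fit_clipped_gaussian(fbar=0.3, g=0.03)
print("underlying Gaussian: mu =", round(mu, 4), " s =", round(abs(s), 4))
```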

Journal ArticleDOI
TL;DR: In this paper, lower and upper Bayes-confidence bounds for the scale parameter were derived for the Weibull failure model when the parameter is being characterized by the inverted gamma and uniform probability density functions.
Abstract: The following bounds are derived for the Weibull failure model when the parameter is characterized by the inverted gamma and uniform probability density functions (pdf's): lower and upper Bayes-confidence bounds for the scale parameter, and lower bounds for the reliability function. To illustrate the results a Monte Carlo simulation was performed to obtain 90% and 95% Bayes-confidence bounds for the scale parameter and lower limits for the corresponding reliability function of the Weibull failure distribution. The consequences that one encounters when "wrong" priors have been chosen to characterize the random behavior of the scale parameter are investigated. The results are compared for the case in which the prior pdf of the scale parameter is uniform but an inverted gamma was used, and vice versa.
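
A conjugate-prior sketch shows how such Bayes-confidence bounds arise. Assuming the parametrization f(t) = (β/θ) t^(β−1) exp(−t^β/θ) with known shape β, an inverted gamma prior on the scale θ is conjugate, so the bounds are posterior quantiles; the hyperparameters, shape, and sample below are illustrative, and the paper's exact parametrization may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
beta, theta_true = 1.5, 4.0                 # known shape; theta is the scale
t = (theta_true * rng.exponential(size=30)) ** (1.0 / beta)  # t^beta/theta ~ Exp(1)

# With f(t) = (beta/theta) t^(beta-1) exp(-t^beta/theta), an inverted gamma
# prior IG(a, b) on theta is conjugate: posterior is IG(a + n, b + sum(t^beta)).
a, b = 2.0, 3.0                             # illustrative prior hyperparameters
a_post, b_post = a + len(t), b + np.sum(t ** beta)

lo = stats.invgamma.ppf(0.05, a_post, scale=b_post)
hi = stats.invgamma.ppf(0.95, a_post, scale=b_post)
print(f"90% Bayes-confidence bounds for theta: ({lo:.2f}, {hi:.2f})")

# R(t) = exp(-t^beta/theta) increases with theta, so the lower bound on theta
# yields a lower Bayes-confidence bound on reliability.
print("lower reliability bound at t = 2:", np.exp(-2.0 ** beta / lo))
```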