
Showing papers on "Probability density function published in 1977"


Journal ArticleDOI
TL;DR: In this article, a unified statistical analysis of a premixed turbulent flame supported by a single-step global reaction is presented, in which a set of time-averaged balance equations, derived from the exact equations of reacting turbulent flow under a thin-shear-layer, fast-chemistry approximation, is employed.

417 citations


Journal ArticleDOI
TL;DR: In this paper, a set of families of distributions which might be useful for fitting data, described by Burr, is examined; special attention is focused on the family Type XII, with generic distribution function 1 − (1 + x^c)^(−k) (x > 0), which yields a wide range of values of skewness, √β₁, and kurtosis, β₂.
Abstract: SUMMARY A set of families of distributions which might be useful for fitting data was described by Burr (1942). Special attention was focused on the family, Type XII, with generic distribution function 1 − (1 + x^c)^(−k) (x > 0), which yields a wide range of values of skewness, √β₁, and kurtosis, β₂. The area in the (√β₁, β₂) plane corresponding to the Type XII distributions is derived and presented in two figures. 1. INTRODUCTION Suppose that Z is a positive random variable with probability density function

281 citations


Journal ArticleDOI
TL;DR: A closed-form solution is presented for the composite probability distribution of power levels derived from short-term Rayleigh fading with superimposed long-term lognormal variations of the mean value.
Abstract: A closed-form solution is presented for the composite probability distribution of power levels derived from short-term Rayleigh fading with superimposed long-term lognormal variations of the mean value. An example shows how the results can be applied to the prediction of bit error rates in a mobile radio data transmission channel, and how the error rate varies with the standard deviation of the lognormal distribution.
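
As a hedged illustration of such a composite density (a sketch under our own assumptions, not code or notation from the paper): if short-term fading makes the received power exponentially distributed about a local mean, and the local mean power is lognormal with a standard deviation quoted in decibels, the composite pdf is the conditional exponential density integrated against the lognormal:

```python
# Hedged sketch: composite pdf of Rayleigh (exponential-power) fading with a
# lognormally distributed local mean power. Parameters are illustrative.
import numpy as np
from scipy.integrate import quad

mu_dB, sigma_dB = 0.0, 6.0      # lognormal mean and std of local mean power, in dB
k = np.log(10.0) / 10.0         # converts dB to natural log units

def composite_pdf(p):
    """Density of received power p, mixing exponential over the lognormal mean."""
    def integrand(ln_m):
        m = np.exp(ln_m)
        exp_pdf = np.exp(-p / m) / m          # exponential pdf given local mean m
        gauss = (np.exp(-(ln_m - k * mu_dB) ** 2 / (2 * (k * sigma_dB) ** 2))
                 / (np.sqrt(2 * np.pi) * k * sigma_dB))
        return exp_pdf * gauss
    val, _ = quad(integrand, k * (mu_dB - 8 * sigma_dB), k * (mu_dB + 8 * sigma_dB))
    return val

for p in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"p = {p:4.1f}   pdf ≈ {composite_pdf(p):.4f}")
```

Increasing sigma_dB fattens the tail of the composite distribution, which is what drives the bit-error-rate behavior discussed in the abstract.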

259 citations


Journal ArticleDOI
TL;DR: This paper introduces a new methodology for obtaining the stationary waiting time distribution in single-server queues with Poisson arrivals by exploiting the observation that the stationary density of the virtual waiting time can be interpreted as the long-run average rate of downcrossings of a level in a stochastic point process.
Abstract: This paper introduces a new methodology for obtaining the stationary waiting time distribution in single-server queues with Poisson arrivals. The basis of the method is the observation that the stationary density of the virtual waiting time can be interpreted as the long-run average rate of downcrossings of a level in a stochastic point process. Equating the total long-run average rates of downcrossings and upcrossings of a level then yields an integral equation for the waiting time density function, which is usually both a linear Volterra and a renewal-type integral equation. A technique for deriving and solving such equations is illustrated by means of detailed examples.
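
As a hedged sketch of the balance underlying the method (a standard M/G/1 instance written in our own notation, not quoted from the paper): with Poisson arrival rate λ, service-time distribution B, stationary virtual-waiting-time density f, and probability P₀ of an empty system, equating the long-run rates of downcrossings (left side) and upcrossings (right side) of level x gives

$$ f(x) \;=\; \lambda\left[ P_0\bigl(1 - B(x)\bigr) + \int_0^x \bigl(1 - B(x-y)\bigr) f(y)\,dy \right], \qquad x > 0, $$

which is exactly the kind of linear Volterra, renewal-type integral equation the abstract describes.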

136 citations


Journal ArticleDOI
TL;DR: In this paper, a random-matrix model is used to describe the transformation of kinetic energy of relative motion into intrinsic excitation energy typical of a deeply inelastic heavy-ion collision.

118 citations



Journal ArticleDOI
Masuo Suzuki
TL;DR: In this paper, a scaling theory of transient phenomena is formulated near the instability point for the moments of the relevant macrovariable, for the generating function, and for the probability distribution function.
Abstract: A general scaling theory of transient phenomena is formulated near the instability point for the moments of the relevant intensive macrovariable, for the generating function, and for the probability distribution function. This scaling theory is based on a generalized scale transformation of time. The whole range of time is divided into three regions, namely the initial, scaling, and final regions. The connection procedure between the initial region and the scaling region is studied in detail. This scaling treatment has overcome the difficulty of divergence of the variance for a large time which was encountered in the Ω-expansion, and this scaling theory yields correct values of moments to order unity for an infinite time. Some instructive examples are discussed for the purpose of clarifying the concepts of the scaling theory.

97 citations


Journal ArticleDOI
TL;DR: In this paper, the rate at which the mean integrated square error decreases with sample size is evaluated for general $L^1$ kernel estimates and for the Fourier integral estimate of a probability density; compared against the minimum M.I.S.E., the Fourier integral estimate is found to be asymptotically optimal.
Abstract: The rate at which the mean integrated square error decreases as sample size increases is evaluated for general $L^1$ kernel estimates and for the Fourier integral estimate for a probability density. The rates are compared to that of the minimum M.I.S.E.; the Fourier integral estimate is found to be asymptotically optimal.
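
A hedged numerical sketch of the quantity being studied (our own construction, not the paper's): the MISE of a fixed-bandwidth Gaussian-kernel estimate of a standard normal density, estimated by Monte Carlo, shrinking as the sample size grows:

```python
# Hedged sketch: Monte Carlo estimate of the mean integrated square error
# (MISE) of a Gaussian-kernel density estimate of N(0,1) as n grows.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-5.0, 5.0, 501)
dx = grid[1] - grid[0]
true_pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)

def kde(sample, h):
    # Gaussian kernel density estimate evaluated on `grid`.
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-u**2 / 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

for n in (100, 400, 1600):
    h = 1.06 * n ** (-0.2)   # Silverman-type bandwidth for a unit-variance target
    ise = [((kde(rng.standard_normal(n), h) - true_pdf) ** 2).sum() * dx
           for _ in range(200)]
    print(f"n = {n:5d}   MISE ≈ {np.mean(ise):.5f}")
```

For a second-order kernel the printed values should fall roughly like n^(−4/5), the rate against which the paper measures asymptotic optimality.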

93 citations


Journal ArticleDOI
TL;DR: An experimental study has been conducted to verify the theoretical analysis of laser speckle statistics, and its results are in good agreement with the theoretical ones.
Abstract: The statistical properties (i.e., the probability density function and the average contrast) of laser speckle produced by a weak diffuse object in the diffraction field have been theoretically and experimentally studied under the assumption of Gaussian statistics for the formation of speckles. General formulas for the probability density function and the average contrast, valid for an entire range of object surfaces and for the whole diffraction field, are introduced, and their special cases, which have been studied in the past, are derived and discussed. These formulas for the probability density function and the average contrast are then evaluated in the diffraction field of a weak diffuse object illuminated by a Gaussian laser beam. An experimental study has been conducted to verify the above theoretical study, and its results are in good agreement with the theoretical ones. The circularity and noncircularity of the speckle statistics in the diffraction field are discussed on the basis of the theoretical study.

65 citations


Journal ArticleDOI
TL;DR: In this article, variable density effects on the Prandtl-Kolmogoroff model for the eddy transport coefficient and on the scalar dissipation are shown to bring the predictions of the theory into more satisfactory agreement with experimental results for the orientation of the combustion wave in the case of strong interaction.
Abstract: Earlier calculations of planar, oblique, and normal combustion waves based on the Bray-Moss model of premixed turbulent flows have been improved and the theory exploited further. Variable density effects on the Prandtl-Kolmogoroff model for the eddy transport coefficient and on the scalar dissipation are shown to bring the predictions of the theory into more satisfactory agreement with experimental results for the orientation of the combustion wave in the case of strong interaction, the case most amenable to comparison. The theory is used to compare the distributions through the reaction zone of several statistical quantities as given by conventional and Favre averaging; large differences are found in some quantities, with the implication that different modeling to achieve closure may be required for the two means of averaging when the heat release, and therefore variable density effects, are significant. In particular, it is found that the triple correlation terms, which enter the momentum and scalar fluxes according to conventional averaging, are the principal source of the differences in the fluxes as given by the two means of averaging.
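
For reference, the two averaging conventions compared here are related by the standard density-weighted (Favre) definitions (textbook identities, not formulas quoted from the paper):

$$ \tilde{\phi} = \frac{\overline{\rho\phi}}{\overline{\rho}}, \qquad \phi = \tilde{\phi} + \phi'', \qquad \overline{\rho\,\phi''} = 0, $$

so the conventional (Reynolds) mean $\overline{\phi}$ and the Favre mean $\tilde{\phi}$ of the same quantity differ precisely where density fluctuations, i.e. heat release, are significant.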

61 citations


Journal ArticleDOI
01 Mar 1977
TL;DR: In this article, it is shown that the counting circles on a sphere for constructing density diagrams are not circles on an equal-area (Lambert) projection of such a sphere; the correct curves are presented.
Abstract: On the Theory of the Evaluation of Joint Orientation Measurements. The current procedure for the evaluation of sets of joint-orientation measurements in terms of regional stresses is critically reviewed. It is shown that the counting circles on a sphere for constructing density diagrams are not circles on an equal-area (Lambert) projection of such a sphere; the correct curves are presented. It is shown that the search for modal maxima in the density diagrams corresponds to a nonparametric statistic for which it is difficult to give confidence limits. General considerations about the statistics of experiments yield the result that 12–15 joint measurements in an outcrop are sufficient to define 3 maxima, if such maxima exist at all. In order to give confidence limits, it is best to introduce a parametric model for the distribution of joint orientations. For a single cluster of orientations the probability density function is chosen as proportional to exp(k² cos²θ), where θ is the polar deviation angle from the “mean” direction; since the basic joint orientations have three fundamental directions, three probability density functions of the above type have to be superposed. These are determined by giving 11 parameters which can be determined by a function-minimization procedure from a given set of measurements. This procedure is best carried out on a computer. In this fashion, confidence limits for the preferred joint directions can be obtained. For the determination of the stresses, only two preferred sets of joints are required, reducing the necessary number of parameters to 7. According to the usual theory of the statistics of trials, it is seen that 21 measurements are required to determine them. Thus, using a parametric statistic, it is seen that 21 measurements of joint orientations in an outcrop should suffice to determine the stress field that produced them. If the sharpness of the distributions is assumed as a priori known, this reduces again to the 15 or 20 measurements required for a non-parametric evaluation.
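
A hedged reconstruction of the parameter count (the mixture form is our assumption; only the single-cluster density exp(k² cos²θ) is given in the abstract): superposing three clusters with mean directions μᵢ, sharpnesses kᵢ, and weights wᵢ summing to one,

$$ p(\mathbf{n}) \;\propto\; \sum_{i=1}^{3} w_i \exp\!\bigl(k_i^{2}\cos^{2}\theta_i\bigr), \qquad \theta_i = \angle(\mathbf{n}, \boldsymbol{\mu}_i), $$

gives three directions (2 parameters each), three sharpnesses, and two free weights: the 11 parameters mentioned above.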

Journal ArticleDOI
TL;DR: In this paper, the joint probability density function of the projections was used to derive the reconstruction scheme which is optimum in the maximum likelihood sense, and it was shown that for an average number of counts detected in excess of approximately 100 per projection, the image is essentially unbiased.
Abstract: The stochastic nature of the projections used in transmission image reconstruction has received little attention to date. This paper utilizes the joint probability density function of the projections to derive the reconstruction scheme which is optimum in the maximum likelihood sense. Two regimes are examined: that where there is significant probability of a zero count projection, and that where the zero count event may be safely ignored. The former regime leads to a complicated algorithm whose performance is data dependent. The latter regime leads to a simpler algorithm. Its performance, in terms of its bias and variance, has been calculated. It is shown that, for an average number of counts detected in excess of approximately 100 per projection, the image is essentially unbiased, and for counts in excess of approximately 2500 per projection, the image approximately attains the minimum variance of any reconstruction scheme using the same measurements.
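
In outline (the standard transmission counting model such derivations start from, written in our own notation): with incident counts N₀ and detected counts Nᵢ that are Poisson with mean determined by the attenuation image μ, the log-likelihood to be maximized is

$$ \ell(\mu) = \sum_i \Bigl( N_i \ln \bar{N}_i(\mu) - \bar{N}_i(\mu) \Bigr) + \text{const}, \qquad \bar{N}_i(\mu) = N_0\, e^{-\sum_j a_{ij}\mu_j}, $$

and in the regime where zero counts can be safely ignored, the processed projection −ln(Nᵢ/N₀) is approximately Gaussian, which is what permits the simpler algorithm.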

Journal ArticleDOI
TL;DR: In this paper, the first two moments of a probability density characterizing an ensemble of possible true states were extracted from radiosonde observations of the 500 mb geopotential height field.
Abstract: The technique of stochastic dynamic prediction proposed by Epstein is applied to atmospheric data. The motivation for the approach is discussed and a review is given of the development of the stochastic dynamic equations which, subject to the third-moment discard approximation, describe the evolution of the first two moments of a probability density characterizing an ensemble of possible true states. The method of “least squares” is used to extract the moments directly from radiosonde observations of the 500 mb geopotential height field. Approaching the analysis problem from a Bayesian standpoint leads to a weighted average of the new observations and the forecast, the appropriate weighting for the latter being supplied by the stochastic forecast itself. The basic physical model employed is a spectral form of the equivalent barotropic. The effects of the simplicity of the dynamical model on the growth of error (external error growth) must be considered explicitly when making stochastic forecasts,...
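
The Bayesian weighting described reduces, in the simplest scalar form (a standard result, not a formula quoted from the paper), to precision-weighted averaging of the forecast z_f and the new observation z_o:

$$ \hat{z} = \frac{\sigma_f^{-2} z_f + \sigma_o^{-2} z_o}{\sigma_f^{-2} + \sigma_o^{-2}}, $$

where the forecast error variance σ_f², supplied by the stochastic forecast itself, sets how heavily the new observation is weighted.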

Journal ArticleDOI
TL;DR: The complete statistical description of a first-order correlative tracking system with periodic nonlinearity is shown to be embedded in a renewal process and the time-dependent probability density function of the phase error is computed.
Abstract: The complete statistical description of a first-order correlative tracking system with periodic nonlinearity is shown to be embedded in a renewal process. The time-dependent probability density function of the phase error, as well as the distribution of the cycle slips, is computed. The use of the renewal process approach makes it possible for the first time to compute the distribution of the positive and negative number of cycle slips within a given time interval. This information is sufficient to determine the probability density function of the absolute phase error.

Journal ArticleDOI
Irving Kanter
TL;DR: In this paper, the probability density function (pdf) of the monopulse ratio when N independent samples of difference and sum signals are processed in a maximum likelihood receiver is derived, and its bias and variance for various jam-to-noise ratios, locations of the centroid with respect to the boresight direction, and number of samples processed are presented.
Abstract: When a radar with amplitude-comparison monopulse arithmetic encounters signals from multiple Gaussian sources it will "point" to the centroid of the incident radiation. The probability density function (pdf) of the monopulse ratio when N independent samples of difference and sum signals are processed in a maximum likelihood receiver is derived. For finite jam-to-noise ratio the estimate has a bias which is independent of N. The variance of the estimate does, however, depend upon N. Central moments of order less than or equal to 2N − 2 exist and are given by a simple formula. Plots of the pdf and its bias and variance for various jam-to-noise ratios, locations of the centroid with respect to the boresight direction, and numbers of samples processed are presented in the accompanying figures.

Journal ArticleDOI
TL;DR: Necessary conditions for the optimal searching tracks for a moving target are presented, where the motion of the target is determined by a parameter vector having a known probability distribution function.
Abstract: Necessary conditions for the optimal searching tracks for a moving target are presented. The motion of the target is determined by a parameter vector having a known probability distribution function.

Journal ArticleDOI
TL;DR: In this article, two methods of uncertainty analysis, called the response surface method and the crude Monte Carlo method, are compared for the probability density function of the peak cladding temperature as computed by a simplified nuclear code subjected to seven uncertainty parameters.
Abstract: A demonstration of two methods of uncertainty analysis was carried out to assess their utility for future use in treating computer models of nuclear power systems. The two methods of uncertainty analysis, called the response surface method and the crude Monte Carlo method, produced comparable results for the probability density function of the peak cladding temperature as computed by a simplified nuclear code that was subjected to seven uncertainty parameters. From these density functions, the upper cumulative tail probabilities were obtained and were shown to be measures of parameter margin. The response surface method provides sensitivity coefficients and also an inexpensive framework for evaluating the effects of the various assumptions inherent in the method. The crude Monte Carlo method provides no sensitivity coefficients and requires a complete rerun if a single uncertainty input density should be changed. The response surface method is recommended for use, where economically feasible, since the advantages of the method far outweigh the disadvantages.
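
A hedged sketch of the two approaches side by side (our own toy stand-in for the code output; the seven-parameter model and all values are illustrative):

```python
# Hedged sketch: crude Monte Carlo vs. a quadratic response surface for
# propagating seven uncertain inputs through a model. `model` stands in
# for an expensive simulation (e.g. peak cladding temperature).
import numpy as np

rng = np.random.default_rng(1)
dim = 7

def model(x):
    # Illustrative placeholder for the expensive code.
    return 800.0 + x @ np.arange(1.0, dim + 1) + 0.5 * x[0] * x[1]

# Crude Monte Carlo: rerun the model for every input sample.
xs = rng.standard_normal((20_000, dim))
mc = np.array([model(x) for x in xs])

# Response surface: fit a quadratic surrogate on a small design, then
# sample the cheap surrogate instead of the model.
design = rng.standard_normal((200, dim))
feats = lambda x: np.concatenate([[1.0], x, np.outer(x, x)[np.triu_indices(dim)]])
A = np.array([feats(x) for x in design])
coef, *_ = np.linalg.lstsq(A, np.array([model(x) for x in design]), rcond=None)
rs = np.array([feats(x) for x in xs]) @ coef

print(f"Monte Carlo:      mean = {mc.mean():7.2f}  std = {mc.std():.2f}")
print(f"response surface: mean = {rs.mean():7.2f}  std = {rs.std():.2f}")
# coef[1:dim+1] are the linear sensitivity coefficients that crude Monte
# Carlo, by itself, does not provide.
```

The tail probabilities discussed in the abstract would be read off the empirical distributions of `mc` and `rs`; the surrogate also makes it cheap to redo the analysis when one input density changes.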

Journal ArticleDOI
TL;DR: In this paper, the discharge at which levees fail is assumed to be a random variable with a particular distribution, and the results on the flood frequency curve are presented for two conditions: one for the failure discharge having a uniform probability density function and the other for failure discharges having a quadratic distribution.
Abstract: In most studies of flood levee reliability the probability of failure is taken to be equal to the probability that the flood peak discharge will exceed the capacity of the channel-levee system. Such an analysis assumes that levees fail only by overtopping, whereas in many cases levees fail at discharges well below the channel capacity. In the present paper the discharge at which levees fail is assumed to be a random variable with a particular distribution. The effects on the flood frequency curve are presented for two conditions: one for the failure discharge having a uniform probability density function and the other for the failure discharge having a quadratic distribution. In a similar manner the effects on the net benefits from levee construction are also analyzed and results are given.
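
In outline (standard reliability reasoning in our own notation, not the paper's formulas): if the failure discharge Q_f has density g below the channel capacity and the annual flood peak Q has distribution function F_Q, the annual failure probability is

$$ P_f = \Pr(Q > Q_f) = \int g(q)\,\bigl(1 - F_Q(q)\bigr)\,dq, $$

with g taken uniform or quadratic to produce the two conditions analyzed.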


Journal ArticleDOI
TL;DR: In this paper, the theoretical distribution of seed numbers (a probability density function) that would be expected on the ground around a parent plant that disperses its seeds explosively is calculated.
Abstract: SUMMARY We calculate the theoretical distribution of seed numbers (a probability density function) that would be expected on the ground around a parent plant that disperses its seeds explosively. The initial velocity of the projected seed and its air resistance are shown to have considerable influence on the maximum dispersal distance, as well as on the angle (the optimum angle of projection) at which it is attained. All other angles give rise to lesser dispersal distances, and their relative frequencies contribute to the shape of the probability density function. The height of release has little influence on the maximum distance, but radically affects the shape of the probability density function. Some practical consequences and biological implications of these results are discussed.
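
A hedged ballistic sketch of the mechanism described (our own minimal model with linear drag; all parameter values are illustrative):

```python
# Hedged sketch: landing distance of an explosively projected seed under
# gravity plus linear air drag, released at height h with speed v0, as a
# function of the projection angle.
import numpy as np

g, drag = 9.81, 1.5           # gravity (m/s^2) and linear drag rate (1/s)
v0, h = 3.0, 0.3              # launch speed (m/s) and release height (m)

def distance(angle):
    # Euler integration of the trajectory until the seed reaches the ground.
    x, y = 0.0, h
    vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
    dt = 1e-4
    while y > 0.0:
        vx -= drag * vx * dt
        vy -= (g + drag * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

angles = np.linspace(0.0, np.pi / 2, 400)
dists = np.array([distance(a) for a in angles])
best = angles[np.argmax(dists)]
print(f"max distance {dists.max():.2f} m at projection angle "
      f"{np.degrees(best):.1f} deg")
# A histogram of `dists` over the assumed distribution of projection angles
# sketches how sub-optimal angles shape the density of seeds with distance.
```

Raising `drag` or lowering `v0` pulls the maximum distance in sharply, while changing `h` mostly reshapes the distribution rather than its maximum, consistent with the abstract.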

Journal ArticleDOI
TL;DR: The exact probability density function for the "monopulse ratio" of an amplitude-comparison monopulse radar is presented in closed form, and the exact results are compared to the linear theory when the signal-to-noise ratio is "small."
Abstract: The exact probability density function for the "monopulse ratio" of an amplitude-comparison monopulse radar is presented in closed form. The analysis is valid for multiple looks at any combination of fixed targets, pulse-to-pulse independently fluctuating targets, and receiver noise. The average receiver noise powers in the difference and sum channels need not be equal. Arbitrary signal-to-noise ratio, arbitrary monopulse ratio versus angle characteristic, and arbitrary locations of the targets and jammers in the beam are accommodated. Plots of the density function for various signal-to-noise ratios and off-axis locations of the targets are included. A linear approximation is introduced, and the exact results are compared to the linear theory when the signal-to-noise ratio is "small."

Journal ArticleDOI
TL;DR: In this article, an approximation to the stationary joint density function of the displacement and velocity response is derived for oscillators with non-linear damping, excited by white noise, by reducing the basic two-dimensional Fokker-Planck equation for the transition density function to a one-dimensional equation relating to the energy envelope of the response.
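
A hedged sketch of the kind of result an energy-envelope reduction yields (standard stochastic averaging in our own notation, not a formula quoted from the paper): for an oscillator ẍ + β(E)ẋ + g(x) = w(t) driven by white noise of intensity 2D, with energy E = ẋ²/2 + ∫₀ˣ g(u) du, the approximate stationary joint density is

$$ p(x, \dot{x}) \;\propto\; \exp\!\left( -\frac{1}{D} \int_0^{E} \beta(u)\, du \right), $$

which collapses to the exact result exp(−βE/D) when the damping coefficient β is constant.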


Journal ArticleDOI
TL;DR: Shpilberg as mentioned in this paper presented a summary review of work in the area of modeling the probability distribution of fire loss amount, and illustrated how probabilistic arguments relating to the physical nature of the fire growth process can aid analysts in their choice of an appropriate model for the probability distribution of fire loss amount.
Abstract: Theoretical distributions frequently used to model fire loss amount are discussed. The problem of selecting models solely on the basis of statistics is addressed. The use of probabilistic arguments applied in reliability theory to infer the type of probability distribution is explored. The concept of the failure rate of a fire is discussed and used to explore implications of the Pareto and lognormal models as to the fire growth phenomenon. It is concluded that probabilistic arguments regarding the nature of the fire growth process can aid analysts in their choice of an appropriate model for the probability distribution of fire loss amount. It is a basic assumption in all actuarial research and risk theory studies that there is a probability distribution of loss amount underlying the risk process. In other words, if a loss occurs, there is the probability S(x) that the loss will be for an amount less than or equal to x. In theoretical studies this distribution often is presented as continuous, having a derivative S'(x) = s(x), which is called the probability density function of fire loss amount. At a certain point in time, the results of the theoretical work have to be applied to practical situations. For example, the distribution of actual losses experienced by an insurer is then considered as a sample from an underlying model without defining the corresponding distribution, the characteristics of which are taken to agree with the corresponding statistics of the observed distribution. Many results can be obtained simply by using these sample statistics. However, it is often more desirable to work with analytically defined loss distributions, and the statistics are then used to establish suitable values of the parameters involved in the analytical distributions. When working with these analytical distributions, the researcher must make use of properties of the distributions other than those covered by the statistics observed. There are two main aspects of general insurance in which a knowledge of the structure of the elements of risk variation is needed: first, in the rate-making process; and second, in dealing with the question of financial stability (monetary risk). Traditional methods of rate making are based only on an estimate of the mean expected loss. Financial stability studies (e.g., studies addressing the probability of ruin of an insurer or evaluating the risk of unbearable monetary loss for a corporation which chooses not to insure its property) usually are based on an estimate of the variance of the possible loss. However, in the area of industrial fire losses, the probability distributions involved are markedly skewed in character (very small probabilities of a very large loss).
Knowledge of its higher moments (in essence, the shape of the tail of the distribution) becomes essential if meaningful quantitative estimates of risk are to be made. Most often, this step involves assumptions regarding the behavior of losses larger than those observed in the sample of available loss experience. Thus, unless there is some theoretical support (not merely observed statistics) for an inference that a particular type of probability distribution is a more reasonable model for the distribution of fire loss amounts as a function of size, inferences derived for any region of the distribution outside the available data will be no better than a straight extrapolation of the data. This paper presents a summary review of work in the area of modeling the probability distribution of fire loss amount, and attempts to illustrate how probabilistic arguments relating to the physical nature of the phenomenon (an approach extensively used in life testing of material failure and in reliability analysis of systems' components) can effectively aid in the choice of an appropriate model. Fire Loss as a Stochastic Process. The total amount of fire losses in a given period can be modeled as a risk process characterized by two stochastic variables: the number of fires and the amount of the losses. If P_r(t) = probability of r losses in the observed period t, S(x) = probability that, given a fire loss, its amount is ≤ x, and S*r(x) = the rth convolution of the distribution function of fire loss amount S(x), then the probability (see Figure 1) that the total loss in a period of length t is ≤ x can be expressed as
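
The abstract breaks off here; given the definitions just introduced, the expression being built is presumably the standard compound-distribution formula

$$ \Pr\{\text{total loss in } (0, t] \le x\} \;=\; \sum_{r=0}^{\infty} P_r(t)\, S^{*r}(x). $$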

Journal ArticleDOI
TL;DR: A new form of the probability density function is given for the sum of n uncorrelated, partially developed speckle patterns under the assumption that the individual speckle fields to be added follow the circular statistics.
Abstract: The probability density function and average contrast of the sum of n uncorrelated, partially developed speckle patterns have been theoretically investigated. A new form of the probability density function is given for the sum of n uncorrelated, partially developed speckle patterns under the assumption that the individual speckle fields to be added follow the circular statistics.

Journal ArticleDOI
TL;DR: In this paper, a statistical study of microscale magnetic fluctuations in the interplanetary and magnetosheath region during quiet conditions is approached from the concept of probability distribution function.
Abstract: A statistical study of microscale magnetic fluctuations in the interplanetary and magnetosheath region during quiet conditions is approached from the concept of probability distribution function. Magnetic field data from Explorer 34 were used to reconstruct the distribution functions and to calculate some of their moments. The distribution functions are found to be nearly tri-Maxwellian as the background field is relatively quiet. The direction of maximum fluctuations is found to be nearly perpendicular to that of the background magnetic field, but the fluctuations are rarely circularly polarized. Across the Earth's bow shock, the degree of fluctuation anisotropy increases, but no noticeable change in relative fluctuation intensity has been observed.

Journal ArticleDOI
TL;DR: Using extended forms of the Fokker-Planck-Kolmogorov equation, the so-called νth-order equations, a general expression is derived for p(y) and some specific cases are investigated; these results are applied to find the average number of zero and level-crossings per unit time of the output process.
Abstract: We consider the probability density function p(y) of the output y(t) of the first-order nonlinear system ẏ + βf(y) = βx, where x = x(t) is the random telegraph signal and f(·) is a nonlinear function. Employing extended forms of the Fokker-Planck-Kolmogorov equation, the so-called νth-order equations, a general expression is derived for p(y) and some specific cases are investigated. These results are applied to find the average number of zero and level-crossings per unit time of the output process.
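
A hedged simulation sketch of the system described (the nonlinearity, rates, and step sizes are our own illustrative choices):

```python
# Hedged sketch: simulate the random telegraph signal x(t) driving
# ydot + beta*f(y) = beta*x and estimate the output density p(y) from a
# long trajectory histogram.
import numpy as np

rng = np.random.default_rng(2)
beta, nu = 2.0, 1.0            # system rate; telegraph switching rate
f = lambda y: y + 0.3 * y**3   # illustrative nonlinearity

dt, n = 1e-3, 500_000
x, y = 1.0, 0.0
ys = np.empty(n)
for i in range(n):
    if rng.random() < nu * dt:    # telegraph flips at Poisson rate nu
        x = -x
    y += beta * (x - f(y)) * dt   # Euler step of ydot = beta*(x - f(y))
    ys[i] = y

hist, edges = np.histogram(ys[n // 10:], bins=60, density=True)  # drop transient
centers = 0.5 * (edges[:-1] + edges[1:])
for c, p_hat in zip(centers[::6], hist[::6]):
    print(f"y = {c:+.2f}   p(y) ≈ {p_hat:.3f}")
```

Because x(t) only takes the values ±1 and f is increasing, the output stays in the interval where |f(y)| ≤ 1, and the histogram approximates the density p(y) obtained analytically in the paper.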

Proceedings ArticleDOI
J. Gosselin
01 May 1977
TL;DR: An approximation for the Probability Density function (PDF) of the Magnitude-Squared Coherence (MSC) estimate is presented, and the detection performance of an MSC estimate processor is analyzed.
Abstract: An approximation for the Probability Density Function (PDF) of the Magnitude-Squared Coherence (MSC) estimate is presented. The analysis is valid for the case of two ergodic Gaussian random processes, partitioned into n_d independent data segments. The Probability of False Alarm (P_fa) is related directly to the decision threshold. Also, the true MSC is given as a function of signal-to-noise ratio (SNR), for equal and nonequal SNR conditions in each sensor. Associated plots are included. The detection performance of an MSC estimate processor is analyzed. This investigation is valid when the Gaussian noise inputs to the two sensors are uncorrelated and have the same auto-spectral density. The Probability of Detection (P_D) is plotted against the SNR for certain values of n_d between 26 and 8000. Also, the detection performance of the coherence estimator (two-sensor) is compared to that of a single-sensor square-law detector, assuming a narrowband Gaussian input signal with rectangular spectrum. Curves of P_D vs. SNR for the processors are presented together.
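
A hedged sketch of the estimator under test (using SciPy's Welch-based coherence as a stand-in for the paper's processor; the threshold value is illustrative, not one derived from the paper's P_fa analysis):

```python
# Hedged sketch: magnitude-squared coherence between two sensors sharing a
# common tone in independent Gaussian noise, thresholded for detection.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs, n = 1000.0, 2**16
t = np.arange(n) / fs
common = np.sin(2 * np.pi * 100.0 * t)      # signal present in both sensors
x = common + rng.standard_normal(n)         # sensor 1, unit-variance noise
y = common + rng.standard_normal(n)         # sensor 2, independent noise

freqs, msc = coherence(x, y, fs=fs, nperseg=1024)   # segment-averaged estimate
threshold = 0.2      # illustrative decision threshold
detected = freqs[msc > threshold]
print(f"peak MSC {msc.max():.2f} at {freqs[np.argmax(msc)]:.0f} Hz; "
      f"{detected.size} bins over threshold")
```

In the paper's framework the threshold would instead be set from the desired probability of false alarm via the null distribution of the MSC estimate for the given number of segments n_d.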

Journal ArticleDOI
TL;DR: In this paper, a radar technique has been developed for measuring the statistical height properties of a random rough surface, which is being applied to the problem of measuring the significant wave height and probability density function of ocean waves from an aircraft or spacecraft.
Abstract: A radar technique has been developed for measuring the statistical height properties of a random rough surface. This method is being applied to the problem of measuring the significant wave height and probability density function of ocean waves from an aircraft or spacecraft. Earlier theoretical and laboratory results have been extended to define the requirements and performance limitations of flight systems. Some details of the current airborne radar system are discussed and results obtained on several experimental missions are presented and interpreted.

Journal ArticleDOI
TL;DR: Two classes of probability densities, the exponential Fourier densities and the exponential trigonometric densities, are introduced on the unit sphere, as well as four kinds of displacements, and an error criterion for direction estimation is presented.
Abstract: Two classes of probability densities, the exponential Fourier densities and the exponential trigonometric densities, are introduced on the unit sphere, as well as four kinds of displacements. In general, neither class is closed under the operation of taking conditional distributions with respect to any of the displacements. A combined usage of both classes is required to study the estimation and detection models obtained from various combinations of the displacements. The merits and disadvantages of each model are discussed. Recursive formulas for the conditional densities and the likelihood ratios are derived for many of the models. The additive measurement noise case is also considered in detail. An error criterion for direction estimation is presented with respect to which the optimal estimates can be easily computed from the probability distribution. A deficiency of the models and techniques developed in this paper is that random driving terms are disallowed in the signal processes.