
Showing papers on "Probability density function published in 1969"


Journal ArticleDOI
TL;DR: In this article, a simplified derivation of the Fokker-Planck equation is given and the uniqueness of the steady-state solution for certain classes of system is discussed.
Abstract: Nonlinear systems disturbed by Gaussian white noises (or by signals obtained from Gaussian white noises) can sometimes be analysed by setting up and solving the Fokker–Planck equation for the probability density in state space. In the present paper a simplified derivation of the Fokker–Planck equation is given. The uniqueness of the steady-state solution is discussed. Steady-state solutions are obtained for certain classes of system. These solutions correspond to or slightly generalize the Maxwell–Boltzmann distribution which is well known in classical statistical mechanics.

137 citations
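The steady-state solutions mentioned here can be checked numerically in the simplest setting: a linear system driven by Gaussian white noise, whose Fokker–Planck equation has a Gaussian (Maxwell–Boltzmann-like) stationary density. A minimal sketch, not from the paper, assuming the scalar Langevin equation dx = −kx dt + σ dW and an Euler–Maruyama discretization (all names and parameters are illustrative):

```python
import math
import random

def ou_stationary_variance(k=1.0, sigma=math.sqrt(2.0), dt=0.01,
                           n_steps=200_000, burn_in=20_000, seed=1):
    """Euler-Maruyama simulation of dx = -k*x dt + sigma dW; returns the
    empirical variance of the trajectory after burn-in.  The Fokker-Planck
    steady state is Gaussian (Maxwell-Boltzmann-like) with variance
    sigma**2 / (2*k)."""
    rng = random.Random(seed)
    x = 0.0
    sq_dt = math.sqrt(dt)
    samples = []
    for step in range(n_steps):
        x += -k * x * dt + sigma * sq_dt * rng.gauss(0.0, 1.0)
        if step >= burn_in:
            samples.append(x)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

# For k = 1 and sigma**2 = 2 the predicted stationary variance is 1.
```

The empirical variance should settle near sigma²/(2k); the agreement is the one-dimensional analogue of the steady-state solutions discussed in the paper.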


Journal ArticleDOI
TL;DR: The cumulant-expansion model for thermal motion as mentioned in this paper is a statistical model without kinematic constraints and provides an unbiased estimate for the skewness of the density function of thermal motion.
Abstract: The usual crystallographic structure-factor equation, with three positional and six anisotropic-temperature-factor coefficients, assumes that the thermal-motion probability density function is centrosymmetric. However, phenomena such as libration and anharmonic vibration can cause skewness. In this study, ten more coefficients per atom representing the third cumulant of the probability density function for thermal-motion are added to the structure-factor equation to permit a determination of the nature of the skewness. The Edgeworth series expansion based on the normal probability density function is used to analyze the results. The equations are generalized to include also the fourth cumulant, which describes kurtosis. The `cumulant-expansion model' for thermal motion is a statistical model without kinematic constraints and provides an unbiased estimate for the skewness of the density function for thermal motion. Application of the model to neutron diffraction data from crystals containing methyl groups (which are undergoing torsional oscillation) confirms the assumption that the density functions for the hydrogen atoms of a methyl group are skewed as an arc about the axis of torsional oscillation. The model has not been applied with X-ray diffraction data; if it were, the resulting parameters would describe the skewness of the combined electron and thermal-motion probability density functions.

86 citations


Journal ArticleDOI
TL;DR: A method is discussed for obtaining an l-dimensional linear subspace of the observation space in which the l-variate marginal distributions are most separated, based on a nonparametric estimate of probability density functions and a distance criterion.
Abstract: Two groups of L-dimensional observations of size N1 and N2 are known to be random vector variables from two unknown probability distribution functions [1]. A method is discussed for obtaining an l-dimensional linear subspace of the observation space in which the l-variate marginal distributions are most separated, based on a nonparametric estimate of probability density functions and a distance criterion. The distance used is essentially the L2 norm of the difference between Parzen estimates of the two densities. An algorithm is developed that determines the subspace for which the distance between the two densities is maximized. Computer simulations are performed.

65 citations
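The L2 distance between Parzen estimates has a closed form in one dimension, because the integral of the product of two Gaussian kernels of bandwidth h is itself a Gaussian of width h√2. A hedged one-dimensional sketch of the distance criterion (the paper works with l-variate marginals in a projected subspace; function and parameter names are illustrative):

```python
import math
import random

def _kernel(u, s):
    # Gaussian density with standard deviation s, evaluated at u.
    return math.exp(-0.5 * (u / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def parzen_l2_distance(xs, ys, h=0.5):
    """Squared L2 norm of the difference between two 1-D Parzen estimates
    built with Gaussian kernels of bandwidth h.  The cross terms use the
    identity that the integral of a product of two Gaussian kernels is a
    Gaussian of width h*sqrt(2), so no numerical integration is needed."""
    s = h * math.sqrt(2.0)
    def cross(a, b):
        return sum(_kernel(ai - bj, s) for ai in a for bj in b) / (len(a) * len(b))
    return cross(xs, xs) + cross(ys, ys) - 2.0 * cross(xs, ys)
```

Samples drawn from well-separated densities yield a larger distance than samples from a common density, which is exactly the property the subspace search maximizes.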


Journal ArticleDOI
TL;DR: In this article, the case of randomly excited fluid turbulence is studied and it is argued that there is a strong mathematical analogy between the classical (turbulent) cascade of energy and the quantum field or many-body problem.
Abstract: The statistical mechanics of systems in which the dominant process is a flow of energy through the modes of the system is studied. The case of randomly excited fluid turbulence is studied and it is argued that there is a strong mathematical analogy between the classical (turbulent) cascade of energy and the quantum field or many-body problem. The energy has an analogue in the one-particle Green function, and entropy can be defined, the latter being the information content in the case of the probability distribution function for the turbulence. General operations in Hilbert space can be carried out with at most two functions, and the energy equation and the maximization of the entropy give two equations which determine the two chosen unknown functions. The case of a random long-wave input of energy is studied and shown to lead to the Kolmogoroff spectrum, and the Kolmogoroff constant is evaluated for the approximation system used.

63 citations


Journal ArticleDOI
TL;DR: It is shown that the least-square-error fit of the measured output signals of the systems offers a recursive formula which is a special case of the proposed algorithm, and the rate of convergence is computed.
Abstract: A stochastic approximation procedure that minimizes a mean-square-error criterion is proposed in this paper. It is applied first to derive an algorithm for recursive estimation of the mean-square-error approximation of the function which relates the input signals and the responses of a memoryless system. The input signals are assumed to be generated at random with an unknown probability density function, and the response is measured with an error which has zero mean and finite variance. A performance index for evaluating the rate of convergence of the algorithm is defined, and then the optimal form of the algorithm is derived. It is shown that the least-square-error fit of the measured output signals of the systems offers a recursive formula which is a special case of the proposed algorithm. A recursive formula for estimation of a priori probabilities of the pattern classes using unclassified samples is then presented. The rate of convergence is computed. A minimum square-error estimate of a continuous probability density function is also obtained by the same algorithm.

47 citations
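The observation that a least-square-error fit yields a recursive formula can be illustrated by its simplest special case: recursive estimation of a constant observed in zero-mean noise, where the stochastic-approximation update with gain sequence 1/n reproduces the sample mean exactly. A minimal sketch, not the paper's general algorithm:

```python
def recursive_mean(stream):
    """Stochastic-approximation update m_n = m_{n-1} + (x_n - m_{n-1}) / n.
    With gain sequence 1/n this recursion reproduces the ordinary
    least-squares (sample-mean) estimate of a constant observed in
    zero-mean noise, one observation at a time."""
    m = 0.0
    for n, x in enumerate(stream, start=1):
        m += (x - m) / n
    return m
```

After the last observation m equals the batch sample mean exactly, so the estimate can be updated online without storing the data.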


ReportDOI
12 Jun 1969
TL;DR: In this paper, the Weibull probability distribution function and its characteristics are discussed, and a method for generation of random variables that fit a selected Weibull statistical population for use in radar simulations is also given.
Abstract: Spatial distributions of the ground clutter backscatter coefficient σ0 for various types of terrain have been found to fit a Weibull distribution quite well. This report discusses the Weibull probability distribution function and its characteristics, and presents some clutter measurements that demonstrate the Weibull distribution. The data presented include measurements taken at L-, S-, and X-band frequencies, at several depression angles, and for various resolution cell sizes. A method for generation of random variables that fit a selected Weibull statistical population for use in radar simulations is also given.

46 citations
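The report's own generation method is not reproduced here, but a standard way to draw Weibull variates for such simulations is inverse-CDF sampling. A minimal sketch, assuming the parameterization F(x) = 1 − exp(−(x/b)^c) with shape c and scale b; all names are illustrative:

```python
import math
import random

def weibull_variate(rng, shape, scale):
    """Inverse-CDF sampling: with U ~ Uniform(0,1) and
    F(x) = 1 - exp(-(x/scale)**shape), solving F(x) = U gives
    x = scale * (-ln(1 - U)) ** (1/shape)."""
    u = rng.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

rng = random.Random(42)
sample = [weibull_variate(rng, 2.0, 1.0) for _ in range(100_000)]
sample_mean = sum(sample) / len(sample)
# Theoretical mean for shape 2, scale 1 is Gamma(1.5) = sqrt(pi)/2 ~ 0.886.
```

Any shape/scale pair fitted to measured clutter can be plugged in the same way to drive a radar simulation.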


Journal ArticleDOI
TL;DR: A class of methods for pattern classification using a set of samples of a type directly derived from concepts related to superposition, and it is shown that smooth potential functions exist that will separate arbitrary sets of sample points.
Abstract: This paper discusses a class of methods for pattern classification using a set of samples. They may also be used in reconstructing a probability density from samples. The methods discussed are potential function methods of a type directly derived from concepts related to superposition. The characteristics required of a potential function are examined, and it is shown that smooth potential functions exist that will separate arbitrary sets of sample points. Ideas suggested by Specht in regard to polynomial potential functions are extended.

39 citations


Journal ArticleDOI
TL;DR: Several measures of system reliability that are applicable to a mission-oriented or time-dependent system are defined and an application of the formulas performed for the Naval Air Development Center, Johnsville, Pa., is discussed.
Abstract: Several measures of system reliability that are applicable to a mission-oriented or time-dependent system are defined: 1) the probability that the system is in a given state at the end of a given mission phase; 2) the probability density function of the random variable time spent in a state, given that the system has just transited into that state; 3) the probability of mission success. Equations for 1)-3) are derived, and an application of the formulas performed for the Naval Air Development Center, Johnsville, Pa., is discussed.

38 citations


Journal ArticleDOI
TL;DR: In this paper, the use of observations of a random function in space (random field) as independent variables in regression is considered including the numerical aspects, and the procedure involves a two stage modified principal component analysis.
Abstract: The use of observations of a random function in space (random field) as independent variables in regression is considered including the numerical aspects. Details are presented for obtaining a numerical approximation to a Karhunen-Loeve expansion when the random function is observed at a large number of points. The procedure involves a two stage modified principal component analysis. The dependent variable is then regressed on the principal components. An example from meteorology is presented. The random field is the 700 mb height surface observed at 505 points over the northern hemisphere. The dependent variable is the temperature at Washington, D. C. A by-product of the analysis is an estimate of the generalized spectrum and covariance function of the random field without assuming symmetry.

36 citations


Journal ArticleDOI
01 Oct 1969
TL;DR: In this paper, the probability of peak sidelobe level of a random array is obtained for any given probability density of element positions, based on the sampling theorem for band-limited functions.
Abstract: The probability of peak sidelobe level of a random array is obtained for any given probability density of element positions. The method is based on the sampling theorem for band-limited functions. These results are applicable to small as well as large arrays. In the case of large arrays, they coincide with those obtained earlier by Lo through a different argument. In addition, this method shows the effect of probabilistic distribution of the elements on the sidelobe level.

35 citations


ReportDOI
01 Oct 1969
TL;DR: In this paper, confidence limits for the expected value EX were found that hold for all continuous distribution functions on (a, b), for known finite numbers a and b (a < b).
Abstract: Consider a random variable X with a continuous cumulative distribution function F(x) such that F(a) = 0 and F(b) = 1 for known finite numbers a and b (a < b). The distribution function F(x) is unknown. A sample of size n is drawn from this distribution. Confidence limits for the expected value EX are to be found that hold for all continuous distribution functions on (a, b).


Journal ArticleDOI
TL;DR: In this article, Hodges and Lehmann showed that point and interval estimates derived from rank tests have asymptotic efficiency involving the functional Δ(F) = ∫−∞∞ f²(x) dx, where F is the population cdf with density f.
Abstract: In nonparametric inference, the importance of the functional Δ(F) = ∫−∞∞ f²(x) dx, where F is the population cdf with density f, could hardly be overemphasized. It is a fundamental quantity involved in the expressions for the asymptotic efficiency of rank tests for many problems like location shift, regression, dependence, analysis of variance, etc. Also in some cases, point as well as interval estimates derived from rank tests have asymptotic efficiency involving the above functional. By a variational argument, Hodges and Lehmann

Book
01 Jan 1969
TL;DR: A monograph on the state-variable approach to continuous estimation of random processes, with applications to analog communication theory, is presented.
Abstract: A monograph on the state-variable approach to continuous estimation of random processes, with applications to analog communication theory.

Journal ArticleDOI
TL;DR: In this article, the probability density of a pair of heavy particles in a fluid of lighter particles is derived from the Liouville equation and proceeds by expansion in the ratio of light to heavy masses, using the technique previously applied successfully to the singlet distribution.
Abstract: The equation of evolution governing the probability density of a pair of heavy particles in a fluid of lighter particles is derived. The derivation starts from the Liouville equation and proceeds by expansion in the ratio of light to heavy masses, using the technique previously applied successfully to the singlet distribution.

Journal ArticleDOI
11 Aug 1969
TL;DR: In this paper, a method for direct numerical evaluation of the cumulative probability distribution function from the characteristic function in terms of a single integral is presented, and no moment evaluation or series expansions are required.
Abstract: A method for direct numerical evaluation of the cumulative probability distribution function from the characteristic function in terms of a single integral is presented. No moment evaluation or series expansions are required. Intermediate evaluation of the probability density function is circumvented. The method takes on a special form when the random variables are discrete.
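The single-integral inversion described here is commonly written as the Gil-Pelaez formula, F(x) = 1/2 − (1/π) ∫₀^∞ Im[e^(−itx) φ(t)]/t dt. A minimal numerical sketch (trapezoidal quadrature with an assumed truncation point, not the authors' specific method), illustrated with the standard normal characteristic function φ(t) = e^(−t²/2):

```python
import cmath
import math

def cdf_from_cf(x, phi, t_max=10.0, n=20_000):
    """Gil-Pelaez inversion, F(x) = 1/2 - (1/pi) * I, where I is the
    integral over t in (0, inf) of Im(exp(-1j*t*x) * phi(t)) / t,
    approximated by the trapezoidal rule truncated at t_max."""
    dt = t_max / n
    total = 0.0
    for k in range(1, n + 1):
        t = k * dt
        weight = 0.5 if k == n else 1.0
        total += weight * (cmath.exp(-1j * t * x) * phi(t)).imag / t
    return 0.5 - total * dt / math.pi

def std_normal_cf(t):
    # Characteristic function of the standard normal distribution.
    return math.exp(-0.5 * t * t)
```

Truncating at t_max is safe here because the normal characteristic function decays like e^(−t²/2); heavier-tailed characteristic functions would need a larger t_max or adaptive quadrature.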

Journal ArticleDOI
TL;DR: In this article, the distribution of the product of two independent stochastic variables whose density functions can be expressed as the products of any two special functions is derived. But the results in this paper are restricted to the case of two non-central F simple and multiple correlation coefficients.
Abstract: tions by analyzing the structure of density functions. This work was motivated by the paper [10] in which the distribution of the product of two non-central chi-square variables is obtained. Their problem arose from a problem in the physical sciences connected with the theory of spin-stabilized rockets. It is pointed out in [10] that the original problem was to obtain the distribution of the product of a central Rayleigh and a non-central Rayleigh variable. By examining the structure of the density function of non-central Rayleigh or non-central chi-square it is apparent that the density is the product of the special cases of two special functions. In [8, chapter 2] a lengthy treatment of Rayleigh and associated distributions is given. We do not know any other particular problem in the physical sciences, but from the structural property of the problem pointed out in [10] it is quite likely that there may be a number of such problems in the physical sciences or in other disciplines, which are to be tackled. Hence we will give the distribution of the product of two independent stochastic variables whose density functions can be expressed as the product of any two special functions. Products of H-functions are used so that almost all classical density functions (central or non-central) will be taken care of, since H-functions are the most generalized special functions. Since a number of types of factors can be absorbed inside the H-function we think that all the distributions which are frequently used in the statistical theory of distributions will be included in this problem that we discuss here, with some modifications in some cases. Several special cases are pointed out so that one can easily get the distribution of the product or ratio of independent stochastic variables whose density functions are products of special functions.
The result in [10] is obtained as a special case and further, the distributions of the products of two non-central F, simple and multiple correlation coefficients are pointed out for the sake of mathematical interest because, structurally, the density functions in these cases belong to different categories. Since several properties of H-functions are available in the literature, it is easy to study other properties or to compute percentage points of the product distribution discussed in this paper. The approach of examining the structure of densities may simplify the problem of obtaining distributions of several statistics. 2. Some definitions and basic results. In this section some definitions and some preliminary results which are used in the derivation of the distribution of a

Patent
07 Jul 1969
TL;DR: In this paper, an online real-time instrument is presented for measurement of auto- and cross-correlation, amplitude probability distribution, and amplitude probability density distributions of random analog signals, and for measurement of average signal response characteristics.
Abstract: An online real-time instrument for measurement of auto- and cross-correlation, amplitude probability distribution, and amplitude probability density distributions of random analog signals, and for measurement of average signal response characteristics. A scanning averager utilized in all the measurements includes capabilities for adapting its time constant to differing clock rates, for selecting its time constant at will for longer and shorter averaging times, and for controlling the range of self-adaptivity. The correlation circuitry includes special timing controls for both basic and high-frequency modes and combines analog and digital circuit design. The latter is accompanied by provision of a synchronized pseudo-random noise source to assure uniform probability density distribution for the full-scale range of the input signal. Conservation of circuitry is achieved through use of the same circuitry for the different measurements of which the instrument is capable. Special logic is included for enhancement of amplitude probability distribution measurements.

Journal ArticleDOI
TL;DR: The forward Kolmogorov equation as discussed by the authors is a variant of the Fokker-Planck equation that is used to describe the change in a gene frequency in a Markovian fashion.
Abstract: A problem of interest to many population geneticists is the process of change in a gene frequency. A popular model used to describe the change in a gene frequency involves the assumption that the gene frequency is Markovian. The probabilities in a Markov process can be approximated by the solution of a partial differential equation known as the Fokker-Planck equation or the forward Kolmogorov equation. Mathematically this equation is ft = −(MΔx f)x + ½(VΔx f)xx, where subscripts indicate partial differentiation. In this equation, f(p, x; t) is the probability density that the frequency of a gene is x at time t, given that the frequency was p at time t = 0. The expressions MΔx and VΔx are, respectively, the first and second moments of the change in the gene frequency during one generation. A rigorous derivation of this equation was given by Kolmogorov (1931).
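The Markov chain underlying this diffusion approximation is easy to simulate directly. A minimal Wright-Fisher sketch of pure random drift (an illustrative model, not taken from the paper) shows that drift spreads the distribution of gene frequency while leaving its mean at the initial value p:

```python
import random

def wright_fisher(rng, two_n, p0, generations):
    """Pure random drift: each generation the 2N genes of the next
    generation are drawn binomially from the current frequency.  This is
    the Markov chain whose diffusion limit has M_dx = 0 and
    V_dx = x(1 - x)/(2N) in the forward Kolmogorov equation."""
    p = p0
    for _ in range(generations):
        count = sum(1 for _ in range(two_n) if rng.random() < p)
        p = count / two_n
    return p

rng = random.Random(7)
finals = [wright_fisher(rng, 100, 0.5, 30) for _ in range(1000)]
mean_final = sum(finals) / len(finals)
# Drift spreads the distribution of gene frequency but E[p_t] stays at p0.
```

Selection or mutation would add a nonzero M term to the drift side of the equation; the sketch covers only the variance term.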

Journal ArticleDOI
William C. Y. Lee1
TL;DR: In this article, a comparison of the statistical properties of energy density reception of mobile radio signals with those of multichannel predetection combining diversity reception using an array of whip antennas lined up transverse to the direction of motion was made.
Abstract: A comparison is made of the statistical properties of energy density reception of mobile radio signals with those of multichannel predetection combining diversity reception using an array of whip antennas lined up transverse to the direction of motion. Theoretically, the energy density system is about as effective as a 3-channel diversity system in reducing fading. This conclusion was obtained from cumulative distribution function (cdf) curves, level crossing rate curves, and power spectral density curves. Experimentally, the probability distribution curves show that the variation of the signal amplitude of a 4-channel diversity system is less than that of an energy density signal, but the fading rates of the two signals are almost the same. The experimental power spectral density curves verify that the shape of the output spectrum for the diversity receiver is independent of the number of channels, but the level is reduced in magnitude with more channels. The output spectrum for the energy density system is more concentrated at lower frequencies than that of the diversity receiver.

Patent
25 Feb 1969
TL;DR: An improved flexible fatigue testing apparatus for stressing a specimen yields a probability density of stress load amplitudes that is controlled in accordance with a random binomial pulse sequence having a variable probability of pulse occurrence and/or a variable pulse repetition rate as mentioned in this paper.
Abstract: An improved flexible fatigue testing apparatus for stressing a specimen yields a probability density of stress load amplitudes that is controlled in accordance with a random binomial pulse sequence having a variable probability of pulse occurrence and/or a variable pulse repetition rate. Analog control voltages uniquely corresponding to the pulse patterns occurring during successive test intervals operate a hydraulically driven ram coupled to the specimen. The ram imparts to the specimen a load having an amplitude determined by the then-occurring control signal.

Journal ArticleDOI
TL;DR: This paper studies a particular random tour continuous random walk, where a particle moves in two dimensions at constant speed by choosing successive travel directions that are independent and uniformly distributed between 0 and 2π, with the lengths of the steps between direction changes being independent, exponentially distributed, random variables.
Abstract: This paper studies a particular random tour continuous random walk: A particle moves in two dimensions at constant speed by choosing successive travel directions that are independent and uniformly distributed between 0 and 2π, with the lengths of the steps between direction changes being independent, exponentially distributed, random variables. An analytic expression for the probability density of the particle's position after a time t is derived, and an application is made to a military situation where the "particle" is a target trying to escape being destroyed by an unseen enemy. The bulk of the paper is devoted to deriving the density function.
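The walk itself is straightforward to simulate, and because a uniform turn angle makes the velocity correlation decay as e^(−λt), the mean squared displacement is 2(v/λ)²(λt − 1 + e^(−λt)), which gives a check on the simulation. A minimal sketch assuming unit speed and unit direction-change rate; the paper's analytic density is not reproduced:

```python
import math
import random

def random_tour_position(rng, speed, rate, t_total):
    """One realization of the walk: pick a uniform direction, travel at
    constant speed for an exponentially distributed time (mean 1/rate),
    turn, and repeat until the total time is used up."""
    x = y = 0.0
    remaining = t_total
    while remaining > 0.0:
        tau = min(rng.expovariate(rate), remaining)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += speed * tau * math.cos(theta)
        y += speed * tau * math.sin(theta)
        remaining -= tau
    return x, y

rng = random.Random(3)
positions = [random_tour_position(rng, 1.0, 1.0, 5.0) for _ in range(5000)]
msd = sum(x * x + y * y for x, y in positions) / len(positions)
# With uniform turn angles the velocity correlation is exp(-rate*t), so the
# mean squared displacement is 2*(speed/rate)**2*(rate*t - 1 + exp(-rate*t)),
# about 8.01 for these parameters.
```

Histogramming the simulated positions would approximate the density function that the bulk of the paper derives analytically.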

Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo study was performed, computing coherences and confidence intervals upon non-Gaussian time series using both a rectangular distribution and a χ² distribution with one degree of freedom.
Abstract: Previous work on computation of coherence estimates between two time series and the confidence intervals about these estimates has always assumed that the time series have a Gaussian probability density function. Here a Monte Carlo study was performed, computing coherences and confidence intervals upon non-Gaussian time series. Using both a rectangular distribution and a χ² distribution with one degree of freedom, the results appear to justify the notion that the assumption of a Gaussian distribution has a fairly small importance in the computation of the above statistics.

Journal ArticleDOI
TL;DR: In particular, Longuet-Higgins as mentioned in this paper obtained series expressions which are variants and generalizations of that given by Rice (1945, equation 3.4-11) for the time between an arbitrary upcrossing of zero to the (r+1)st subsequent downcrossing.
Abstract: Many authors, particularly in the physical and engineering sciences, have considered the problem of obtaining the distribution of the time between successive axis-crossings by a stochastic process. This interest dates back to the pioneering work of S. O. Rice (1945), who developed a series expression and used a very simple approximation for the case of axis-crossings by a stationary normal process. More generally, one may consider the distribution of the time between an axis-crossing and the nth subsequent axis-crossing, or between, say, an upcrossing of the axis and the nth subsequent downcrossing. Problems of this type have been discussed by Longuet-Higgins (1962), with particular reference to the normal case. In particular he obtains series expressions which are variants and generalizations of that given by Rice (1945, equation 3.4-11). For example, Longuet-Higgins calculates (by somewhat heuristic methods) a series for the probability density for the time between an "arbitrary" upcrossing of zero to the (r+1)st subsequent upcrossing. (A precise definition of what is meant by such a density is usually not given in the relevant literature. This difficulty, and one method of overcoming it, will be discussed in Section 2.) The series just referred to may be written in the form

Journal ArticleDOI
TL;DR: In this article, a 25-mile wholly refracted (RRR) path for 48 hours was used to record the arrival time and amplitude of acoustic pulses at 11 vertically spaced hydrophones.
Abstract: A propagation experiment was conducted over a 25‐mile wholly refracted (RRR) path for 48 h in October 1967. The data recorded during the experiment consisted of both the arrival time and the amplitude of acoustic pulses at 11 vertically spaced hydrophones. The source and the receivers were fixed, and the medium was sampled every tenth of an hour with a 10‐msec pulse at 800 Hz. The propagation path between source and receiver was purely refractive and amounted to a complete cycle of RRR type of propagation. The means, variances, autocorrelation, autospectral density, cross correlation, and coherence function were estimated for both the amplitude and phase data. The probability distribution function for both amplitude and phase is demonstrated to be nearly Gaussian. An interrelationship between the time and space characteristics of the random inhomogeneities of the medium is illustrated. Wherever possible, the measured results are compared with the theoretical values of Chernov (1960). A comparison of the t...

16 Sep 1969
TL;DR: In this article, the problem of estimating seismicity and the performance of a system which detects earthquakes is formulated in such a way that maximum likelihood estimation can be applied, and procedures to obtain maximum likelihood estimates of a, b, and the error function mean and variance factors are derived, discussed, and applied to experimental data to check the relevance of the theoretical development.
Abstract: The problem of estimating seismicity and the performance of a system which detects earthquakes is formulated in such a way that maximum likelihood estimation can be applied. The mean number of earthquakes which occur in a fixed time interval is assumed to be of the form exp(a - bm), where m is magnitude and a and b are constants. The probability of detection as a function of m is taken to be an error function. Procedures to obtain maximum likelihood estimates of a, b, and the error function mean and variance factors are derived, discussed, and applied to experimental data to check the relevance of the theoretical development.
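When detection is perfect above some completeness magnitude m_min, the magnitudes under the exp(a − bm) law are exponential with rate b, and the maximum likelihood estimate reduces to b̂ = 1/(mean(m) − m_min). A minimal sketch of this reduced case only; the report's error-function detection model is not included, and all names are illustrative:

```python
import random

def estimate_b(magnitudes, m_min):
    """Maximum-likelihood estimate of b in the exp(a - b*m) occurrence law
    when every event above the completeness magnitude m_min is detected:
    the excess magnitudes m - m_min are then exponential with rate b, so
    b_hat = 1 / (mean(m) - m_min)."""
    mean_m = sum(magnitudes) / len(magnitudes)
    return 1.0 / (mean_m - m_min)

rng = random.Random(11)
b_true, m_min = 2.0, 3.0
mags = [m_min + rng.expovariate(b_true) for _ in range(50_000)]
b_hat = estimate_b(mags, m_min)
```

With imperfect detection near m_min, the likelihood must include the detection probability, which is the joint estimation problem the report actually solves.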

Journal ArticleDOI
TL;DR: In this article, a probability function giving the effect of experimental geometry on the distribution of Doppler velocities for γ-photons emitted from the surface of a disc-shaped source is derived.

Journal ArticleDOI
TL;DR: In this article, the orthogonal components of the radius of gyration, Si, of a random flight chain were computed using an indirect numerical integration algorithm which was based upon a multiple convolution proposed by Forsman and Hughes.
Abstract: Probability density functions for the orthogonal components of the radius of gyration, Si, of a random-flight chain were computed using an indirect numerical integration algorithm which was based upon a multiple convolution proposed by Forsman and Hughes. Computations were performed for relatively long chains of 100–1000 statistical segments t. By using a reduced radius of gyration ξi = (π/2√3) Si⟨Si²⟩^(−1/2) the distributions converged to a limiting function at about t = 500, except for extremely small radii, ξi ≤ [10(2t)^(1/2)]^(−1). Additional computations using the one-dimensional distributions yielded the corresponding probability densities for both the two- and three-dimensional (polar) radii. The computed functions were in excellent agreement with all their limiting properties previously determined by analytical methods.

Journal ArticleDOI
TL;DR: It is shown that the optimum threshold and the probability of error of the system can be accurately estimated by using EVT to obtain properties of the initial probability density functions on their "tails."
Abstract: The use of extreme-value theory (EVT) in the detection of a binary signal in additive, but statistically unknown, noise is considered. It is shown that the optimum threshold and the probability of error of the system can be accurately estimated by using EVT to obtain properties of the initial probability density functions on their "tails." Both constant signals and slowly fading signals are considered. In the case of a fading signal, the detector becomes adaptive. Detection of the constant signal, both with and without an initial learning period, is studied by computer simulation.

Journal ArticleDOI
TL;DR: In this article, a Lagrangian formalism is developed and from it the general equations of motion are derived, which turn out to be formally equivalent to the Schrodinger equation.
Abstract: The motion of a particle is studied under the action of two types of forces, some slowly varying and others rapidly fluctuating. The slowly varying forces are assumed to be given in every particular problem. The rapidly fluctuating forces, which are unknown, are assumed to have a random character. It is shown that the motion of the particle depends on the random forces only through the diffusion effect that they produce. The theory is statistical in character and only the evolution of a probability density can be determined. A Lagrangian formalism is developed and from it the general equations of motion are derived. These turn out to be formally equivalent to the Schrödinger equation. The generalization to a system of particles is straightforward if it is assumed that the random forces act independently and with like intensity on every elementary particle. The expectation values of the fundamental dynamical variables are obtained. The theory proves to be very similar to, but not fully identical with, nonrelativistic quantum mechanics without spin.