
Showing papers on "Probability density function published in 1972"


MonographDOI
01 Jan 1972
TL;DR: In this article, the probability density, Fourier transforms and characteristic functions, joint statistics and statistical independence, correlation functions and spectra, and the central limit theorem are discussed.
Abstract: This chapter contains sections titled: The probability density, Fourier transforms and characteristic functions, Joint statistics and statistical independence, Correlation functions and spectra, The central limit theorem

3,260 citations


Journal ArticleDOI
TL;DR: In this paper an approximation that permits the explicit calculation of the a posteriori density from the Bayesian recursion relations is discussed and applied to the solution of the nonlinear filtering problem.
Abstract: Knowledge of the probability density function of the state conditioned on all available measurement data provides the most complete possible description of the state, and from this density any of the common types of estimates (e.g., minimum variance or maximum a posteriori) can be determined. Except in the linear Gaussian case, it is extremely difficult to determine this density function. In this paper an approximation that permits the explicit calculation of the a posteriori density from the Bayesian recursion relations is discussed and applied to the solution of the nonlinear filtering problem. In particular, it is noted that a weighted sum of Gaussian probability density functions can be used to approximate arbitrarily closely another density function. This representation provides the basis for the procedure that is developed and discussed.

1,267 citations
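A minimal sketch of the Gaussian-sum idea: approximate a skewed target density by a weighted sum of Gaussian components placed on a grid of means. The grid placement, component width, and weighting below are illustrative choices, not the fitting procedure developed in the paper.

```python
import numpy as np

# Target: a skewed density to approximate (Gamma with shape 2, scale 1).
def target_pdf(x):
    return np.where(x > 0, x * np.exp(-x), 0.0)

def gaussian_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Place components on a grid of means; weight each by the target density
# there and renormalize. Spacing and component width are ad hoc here.
means = np.linspace(0.0, 12.0, 40)
sigma = means[1] - means[0]
weights = target_pdf(means)
weights /= weights.sum()

def mixture_pdf(x):
    return sum(w * gaussian_pdf(x, m, sigma) for w, m in zip(weights, means))

x = np.linspace(-2.0, 15.0, 4000)
dx = x[1] - x[0]
l1_error = np.sum(np.abs(mixture_pdf(x) - target_pdf(x))) * dx
print(f"L1 approximation error: {l1_error:.4f}")  # shrinks as the grid refines
```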


Proceedings ArticleDOI
Kai-ching Chu
01 Dec 1972
TL;DR: It is shown in this paper that this class of densities can be expressed as integrals of a set of Gaussian densities, and it is proved that the conditional expectation is linear with exactly the same form as the Gaussian case.
Abstract: A random variable is said to have elliptical distribution if its probability density is a function of a quadratic form. This class includes the Gaussian and many other useful densities in statistics. It is shown in this paper that this class of densities can be expressed as integrals of a set of Gaussian densities. This property is not changed under linear transformation of the random variables. It is also proved in this paper that the conditional expectation is linear with exactly the same form as the Gaussian case. Many estimation results of the Gaussian case can be readily extended. Problems of computing optimal estimation, filtering, stochastic control, and team decisions in various linear systems become tractable for this class of random processes.

140 citations
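As an illustration of the scale-mixture property, the multivariate Student-t (an elliptical density) can be sampled as a Gaussian with a random scale, and the linear conditional mean can be checked empirically against the Gaussian-form slope. The distribution and parameter choices below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 5.0                                 # degrees of freedom (illustrative)
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
L = np.linalg.cholesky(Sigma)

# Multivariate Student-t as a scale mixture of Gaussians:
# x = sqrt(nu / w) * L z, with z ~ N(0, I) and w ~ chi-square(nu).
n = 200_000
z = rng.standard_normal((n, 2))
w = rng.chisquare(nu, size=n)
x = np.sqrt(nu / w)[:, None] * (z @ L.T)

# Gaussian-form conditional mean: E[x1 | x2] = (Sigma12 / Sigma22) * x2.
slope_theory = Sigma[0, 1] / Sigma[1, 1]
mask = np.abs(x[:, 1] - 1.0) < 0.05      # condition on x2 close to 1
print(np.mean(x[mask, 0]), slope_theory * 1.0)  # the two should agree
```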


Journal ArticleDOI
TL;DR: In this article, the angular correlation function of a stationary optical field is introduced, which characterizes the correlation that exists between the complex amplitudes of any two plane waves in the angular spectrum description of the statistical ensemble that represents the field.
Abstract: In the first part of this paper, the concept of the angular correlation function of a stationary optical field is introduced. This function characterizes the correlation that exists between the complex amplitudes of any two plane waves in the angular spectrum description of the statistical ensemble that represents the field. Relations between this function and the more commonly known correlation functions are derived. In particular, it is shown that the angular correlation function is essentially the four-dimensional spatial Fourier transform of the cross-spectral density function of the source. The angular correlation function is shown to characterize completely the second-order coherence properties of the far field. An expression for the intensity distribution in the far zone of a field generated by a source of any state of coherence is deduced. Some generalizations of the far-zone form of the Van Cittert–Zernike theorem are also obtained.

105 citations


Journal ArticleDOI
TL;DR: In this article, the exact finite-sample distribution of the limited-information maximum likelihood estimator when the structural equation being estimated contains two endogenous variables and is identifiable in a complete system of linear stochastic equations is derived.
Abstract: This article is concerned with the exact finite-sample distribution of the limited-information maximum likelihood estimator when the structural equation being estimated contains two endogenous variables and is identifiable in a complete system of linear stochastic equations. The density function derived, which is represented as a doubly infinite series of a complicated form, reveals the important fact that for arbitrary values of the parameters in the model, the LIML estimator does not possess moments of order greater than or equal to one.

91 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented a generalization of von Neumann's method of generating random samples from the exponential distribution by comparisons of uniform random numbers on (0, 1).
Abstract: The author presents a generalization he worked out in 1950 of von Neumann's method of generating random samples from the exponential distribution by comparisons of uniform random numbers on (0, 1). It is shown how to generate samples from any distribution whose probability density function is piecewise both absolutely continuous and monotonic on (−∞, ∞). A special case delivers normal deviates at an average cost of only 4.036 uniform deviates each. This seems more efficient than the Center-Tail method of Dieter and Ahrens, which uses a related, but different, method of generalizing the von Neumann idea to the normal distribution.

62 citations
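The original von Neumann comparison method for Exp(1), which the paper generalizes, fits in a few lines; the generalization to arbitrary piecewise monotone densities and the normal-deviate variant are not reproduced here. A sketch:

```python
import random

def vn_exponential(rng=random.random):
    """Sample Exp(1) via von Neumann's uniform-comparison method."""
    a = 0  # integer part: number of rejected rounds
    while True:
        u1 = rng()
        u_star, k = u1, 1
        # Extend the strictly decreasing run u1 > u2 > ... as far as it goes.
        while True:
            u = rng()
            if u < u_star:
                u_star, k = u, k + 1
            else:
                break
        if k % 2 == 1:       # odd-length run: accept; u1 is the fractional part
            return a + u1
        a += 1               # even-length run: reject round, carry integer part

random.seed(0)
xs = [vn_exponential() for _ in range(100_000)]
print(sum(xs) / len(xs))    # ~1.0, the mean of Exp(1)
```

The acceptance probability per round is 1 − 1/e, and summing the run-length probabilities over odd lengths gives exactly 1 − e^(−x), which is why no exponential or logarithm evaluation is ever needed.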


Journal ArticleDOI
01 Jan 1972
TL;DR: In this paper, the authors extended the stationary random envelope definition to the envelope of nonstationary random processes possessing evolutionary power spectral densities and derived the density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process.
Abstract: The definition of stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of a nonstationary random process possessing an evolutionary power spectral density. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.

55 citations


Journal ArticleDOI
TL;DR: In this paper, the fading characteristics of ionospheric amplitude scintillations can be described by a cumulative amplitude probability distribution function (cdf), which expresses the probability (percentage of time) that the signal amplitude will equal or exceed a given amplitude.
Abstract: The fading characteristics of ionospheric amplitude scintillations can be described by a cumulative amplitude probability distribution function (cdf). The cdf expresses the probability (percentage of time) that the signal amplitude will equal or exceed a given amplitude. Distributions of amplitude variations are constructed from ionospheric scintillations observed on beacon signals from synchronous satellites transmitting at 136 MHz. The resulting distributions are divided into six groups corresponding to ranges of the scintillation index, the predominant measure in scintillation studies. The model distributions are then combined with the occurrence of scintillations in various index ranges to produce cumulative amplitude probability distributions. These have been produced for long-term observations made at Hamilton, Massachusetts; Narssarssuaq, Greenland; and Huancayo, Peru. The results allow engineers to determine the margins necessary for communication and navigation systems. Individual 15-min distributions have been compared to the theoretical distributions obtained by Nakagami [1960] in his m-distribution method of characterizing amplitude scintillation and were found to be in good agreement. The m parameter is shown to be a measure of the frequency dependence of scintillations and can be used to determine a spectral index for interpolating the amplitude distributions to other frequencies of interest.

54 citations
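A hedged sketch of how an m-distribution yields exceedance probabilities of the kind discussed above: under Nakagami's model the normalized intensity is gamma distributed, so the cdf reduces to an incomplete gamma function. The mapping m ≈ 1/S4² from the scintillation index is a common approximation assumed here, not a value taken from the paper.

```python
from scipy.stats import gamma

def amplitude_exceedance(amp, m, omega=1.0):
    """P(amplitude >= amp) under a Nakagami-m fading model.

    Nakagami-m amplitude <=> intensity I = A**2 ~ Gamma(shape=m, scale=omega/m),
    where omega is the mean intensity.
    """
    return gamma.sf(amp**2, m, scale=omega / m)

s4 = 0.5                  # scintillation index (illustrative value)
m = 1.0 / s4**2           # approximate mapping m ~ 1/S4**2 (an assumption here)
for db in (-6, -3, 0, 3):
    amp = 10 ** (db / 20.0)   # amplitude relative to the mean-intensity level
    print(f"{db:+d} dB: P(exceed) = {amplitude_exceedance(amp, m):.3f}")
```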


Journal ArticleDOI
TL;DR: In this article, the authors used the principle of maximum entropy to define tractable distributions for natural and modified rainfall populations, which is an important prerequisite for the evaluation of seeding effects by Bayesian statistics.
Abstract: This study is based on the radar-evaluated rainfall data from 52 south Florida cumulus clouds, 26 seeded and 26 control clouds, selected by a randomization procedure. The fourth root of the rainfall for both seeded and control populations was well fitted by a gamma distribution for probability density. The gamma distribution is prescribed by two parameters, one for scale and one for shape. Since the coefficient of variation of seeded and control cloud populations was the same, the shape parameters for the two gamma distributions were the same, while the seeded population's scale parameter was such as to shift the distribution to higher rainfall values than the control distribution. The best-fit gamma functions were found by application of the principle of maximum entropy. Specification of tractable distributions for natural and modified rainfall populations provides an important prerequisite for the evaluation of seeding effects by Bayesian statistics, a continuing objective in the Experimental M...

53 citations
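The gamma is the maximum-entropy density under constraints on the mean and the mean logarithm, and a maximum-likelihood gamma fit matches exactly those two statistics. A minimal sketch with synthetic stand-ins for the fourth-root rainfall values (the numbers below are simulated, not the Florida data):

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
# Synthetic stand-ins for fourth roots of cloud rainfall (illustrative only).
seeded = rng.gamma(shape=3.0, scale=1.4, size=26)
control = rng.gamma(shape=3.0, scale=1.0, size=26)

for name, x in [("seeded", seeded), ("control", control)]:
    # floc=0 keeps the two-parameter (shape, scale) form used in the study;
    # gamma MLE matches the sample mean and mean log, i.e. the max-entropy fit.
    shape, _, scale = gamma.fit(x, floc=0)
    print(f"{name}: shape={shape:.2f}, scale={scale:.2f}, "
          f"CV={x.std(ddof=1)/x.mean():.2f}")
```

Equal coefficients of variation across the two samples show up as nearly equal fitted shape parameters, with seeding shifting only the scale, which is the structure the abstract describes.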


Journal ArticleDOI
TL;DR: In this paper, the problem of searching for a randomly moving target is considered in the case where the probability density function of the location of the target satisfies an equation of type (1), and a necessary condition for the optimality of the search density function is derived.
Abstract: The problem involved in the search for a randomly moving target is considered in the case where the probability density function of the location of the target satisfies an equation of type (1). The searching effort is expressed in terms of a time-dependent search density function, and a necessary condition for the optimality of the search density function is derived. In the case of a stationary target this condition becomes the familiar one of Koopman [4]. Since most applications would involve the solution of a partial differential equation of parabolic type with appropriate initial and boundary conditions, the application of the optimality condition is difficult. Obviously a special numerical technique needs to be introduced, a task which we shall not attempt in the present paper.

51 citations


Journal ArticleDOI
TL;DR: In this article, moment approximations to the density function of the wavelength were given, i.e., the time between a randomly chosen local maximum with height u and the following minimum in a stationary Gaussian process with a given covariance function.
Abstract: We give moment approximations to the density function of the wavelength, i.e., the time between "a randomly chosen" local maximum with height u and the following minimum in a stationary Gaussian process with a given covariance function. For certain processes we give similar approximations to the distribution of the amplitude, i.e., the vertical distance between the maximum and the minimum. Numerical examples and diagrams illustrate the results.


Proceedings ArticleDOI
01 Jan 1972
TL;DR: In this paper, a method for extracting the target strength density function from the acoustic pulses reflected from the fish is described, and the results of a Monte Carlo simulation of the technique are presented.
Abstract: Fisheries biologists using acoustic stock assessment systems need a signal processor that will provide an estimate of the fish target strength density function. The estimated density function can be used by the biologist to determine the fish size distribution within the surveyed population and to convert the output of an echo integrator into a density estimate. A method for extracting the target strength density function from the acoustic pulses reflected from the fish is described in this paper. The results of a Monte Carlo simulation of the technique are presented.

01 Jul 1972
TL;DR: In this article, a class of bilinear estimation problems involving single-degree-of-freedom rotation is formulated and resolved, and both continuous and discrete time estimation problems are considered.
Abstract: A class of bilinear estimation problems involving single-degree-of-freedom rotation is formulated and resolved. Both continuous and discrete time estimation problems are considered. Error criteria, probability distributions, and optimal estimates on the circle are studied. An effective synthesis procedure for continuous time estimation is provided, and a generalization to estimation on arbitrary abelian Lie groups is included. An intrinsic difference between the discrete and continuous problems is discussed, and the complexity of the equations in the discrete time case is analyzed in this setting. Applications of these results to a number of practical problems including FM demodulation and frequency stability are examined.
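The abstract does not spell out the synthesis procedure, but a minimal example of estimation on the circle is the circular mean: the angle of the summed unit phasors, which minimizes the sample average of the error criterion 1 − cos(θ − θ̂). A sketch under that assumed criterion:

```python
import cmath
import math
import random

def circular_mean(angles):
    """Angle minimizing the sample mean of 1 - cos(theta - estimate):
    the argument of the summed unit phasors."""
    s = sum(cmath.exp(1j * t) for t in angles)
    return cmath.phase(s)

random.seed(0)
true_angle = 2.5
noisy = [(true_angle + random.gauss(0, 0.4)) % (2 * math.pi) for _ in range(500)]
print(circular_mean(noisy))  # close to 2.5; an ordinary average of the wrapped
                             # angles would be biased near the 0/2pi cut
```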

Journal ArticleDOI
TL;DR: In this paper, a method is developed by which the input leading to the highest possible response in an interval of time can be determined for a class of non-linear systems, where the input, if deterministic, is constrained to have a known finite energy (or norm) in the interval under consideration.

Journal ArticleDOI
01 Apr 1972
TL;DR: This paper presents heuristics which were developed to automate the selection of the control parameters and provides a method for determining their value by extrapolation, thereby avoiding a great deal of computation.
Abstract: An economical technique for approximating a joint N-dimensional probability density function has been described by Sebestyen and Edie [20]. The algorithm searches for clusters of points and considers each cluster as one hyperellipsoidal cell in an N-dimensional histogram. Among the advantages of this scheme are: 1) the histogram cell descriptors (location, shape, and size) can be determined adaptively from sequentially introduced data samples of known classification, and 2) the number of cells required for a good fit can usually be held to a small number. No assumptions are required about the underlying statistical structure of the data. The algorithm requires three types of "control parameters" which critically affect its performance and are dependent upon the number of dimensions. The three factors control the birth, shape, and growth rate of the cells. Guides were presented in [20] for choosing the control parameter values. These guides functioned well for spaces of 3 dimensions or less, but did not yield usable values for spaces of greater dimensionality. This paper presents heuristics which were developed to automate the selection of the control parameters. The properties of these parameters were studied as a function of dimension. Two of the control parameters were found to be linearly related to dimension. This provides a method for determining their values by extrapolation, thereby avoiding a great deal of computation.

Journal ArticleDOI
TL;DR: In this article, the first passage time distribution for a preassigned continuous and time homogeneous Markov process described by a diffusion equation has been deeply analyzed and satisfactorily solved.
Abstract: Since the pioneering work of Siegert (1951), the problem of determining the first passage time distribution for a preassigned continuous and time homogeneous Markov process described by a diffusion equation has been deeply analyzed and satisfactorily solved. Here we discuss the “inverse problem” — of applicative interest — consisting in deciding whether a given function can be considered as the first passage time probability density function for some continuous and homogeneous Markov diffusion process. A constructive criterion is proposed, and some examples are provided. One of these leads to a singular diffusion equation representing a dynamical model for the genesis of the lognormal distribution.


Journal ArticleDOI
TL;DR: In this article, a model of isotropically interacting ν-dimensional classical spins with an infinite-range potential of the molecular field type was solved, and the partition function was represented as the integral of e^(−βH_N) over an appropriate weight function, which, for given ν, is the Pearson random walk probability distribution in ν dimensions.
Abstract: A model of isotropically interacting ν-dimensional classical spins with an infinite-range potential of the molecular field type is solved. The partition function is represented as the integral of e^(−βH_N) over an appropriate weight function, which, for given ν, is the Pearson random walk probability distribution in ν dimensions. A molecular field-type phase transition is obtained for all ν.
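The Pearson random-walk distribution mentioned above can be probed by direct Monte Carlo: sum N unit steps with independent, uniformly random directions in ν dimensions and examine the end-to-end distance. A sketch (the step count and dimension are illustrative, and this replaces the paper's analytic treatment with simulation):

```python
import numpy as np

rng = np.random.default_rng(2)

def pearson_walk_distances(n_steps, nu, n_walks=100_000):
    """End-to-end distances of n_steps unit steps in nu dimensions."""
    steps = rng.standard_normal((n_walks, n_steps, nu))
    steps /= np.linalg.norm(steps, axis=2, keepdims=True)  # uniform directions
    return np.linalg.norm(steps.sum(axis=1), axis=1)

r = pearson_walk_distances(n_steps=10, nu=3)
# For unit steps the cross terms average to zero, so E[R^2] = n_steps.
print(r.mean(), np.sqrt((r**2).mean()))  # second value ~ sqrt(10)
```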

Journal ArticleDOI
TL;DR: In this article, the mean square response of a single-degree-of-freedom linear structural system to a particular type of nonstationary random forcing function is derived for rectangular pulses.
Abstract: This paper presents a procedure for calculating the mean square response of a single-degree-of-freedom linear structural system to a particular type of nonstationary random forcing function. The forcing function, herein referred to as segmented nonstationary, is a stochastic process generated by adding a series of covariance stationary zero-mean random forcing functions which are each shaped by deterministic functions of time that do not overlap in the time domain. The system's mean square response is formulated in terms of the segments' time-dependent frequency response functions and the forcing functions' stationary spectral density functions. The results are specialized to consider the response to forcing functions time-modulated by rectangular pulses. Nomenclature: e_r(t) = deterministic modulating function of the rth segment; f_r(t) = covariance stationary forcing function of the rth segment; H(ω) = system frequency response function relating displacement response and excitation; I_r(t, ω) = time-dependent frequency response function; m = mass; q(t) = oscillator displacement; Q(t) = input excitation; R_{f_r f_s}(t1, t2) = cross-correlation function relating f_r(t1) and f_s(t2); S_{f_r f_s}(ω) = cross-spectral density function between forcing functions f_r(t) and f_s(t); ω_0 = system undamped natural frequency; ζ = system damping ratio.
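A numerical sketch of the quantities in the nomenclature: for one segment with a rectangular modulating envelope on a flat band-limited spectrum, the time-dependent frequency response I(t, ω) is computed by direct quadrature and the mean-square response follows as E[q²(t)] = ∫ |I(t, ω)|² S(ω) dω. All discretization and parameter values below are illustrative assumptions, not the paper's cases.

```python
import numpy as np

m, w0, zeta = 1.0, 2 * np.pi, 0.05           # mass, natural frequency, damping
wd = w0 * np.sqrt(1 - zeta**2)

def h(t):
    """Unit-impulse displacement response of the SDOF oscillator (t >= 0)."""
    return np.exp(-zeta * w0 * t) * np.sin(wd * t) / (m * wd)

def e(tau, T=2.0):
    """Rectangular modulating envelope: unity on [0, T], zero afterwards."""
    return np.where((tau >= 0) & (tau <= T), 1.0, 0.0)

def I(t, w, n=2000):
    """Time-dependent frequency response: int_0^t h(t-tau) e(tau) e^{i w tau} dtau."""
    tau = np.linspace(0.0, t, n)
    dtau = tau[1] - tau[0]
    return np.sum(h(t - tau) * e(tau) * np.exp(1j * w * tau)) * dtau

S0 = 1.0                                      # flat two-sided spectral level
w = np.linspace(-6 * w0, 6 * w0, 1201)        # band-limited stand-in for white noise
dw = w[1] - w[0]
for t in (0.5, 1.0, 2.0, 4.0):
    Iw = np.array([I(t, wi) for wi in w])
    Eq2 = np.sum(np.abs(Iw) ** 2 * S0) * dw   # E[q^2(t)] = int |I|^2 S dw
    print(f"t = {t:.1f} s:  E[q^2] ~ {Eq2:.4f}")
```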

Journal ArticleDOI
TL;DR: The conditional Fokker-Planck equation yielding the probability density of the state of a nonlinear dynamical system, conditioned on measurements over a fixed interval, is derived in a novel way.
Abstract: An equation is derived for the probability density of the state of a nonlinear dynamical system, conditioned on measurements over a fixed interval. In deriving the equation, the conditional Fokker-Planck equation yielding the probability density of the filtering problem is used several times in a novel way.

Journal ArticleDOI
TL;DR: In this article, a continuous smoothing technique which is based on a smooth and continuous approximation to the prior density function is presented and results from a Monte Carlo study of the Poisson distribution are reported.
Abstract: Maritz (1966) and Lemon & Krutchkoff (1969) each describe discrete empirical Bayes smoothing techniques. These techniques essentially attempt to approximate the prior distribution function. Here a continuous smoothing technique which is based on a smooth and continuous approximation to the prior density function is presented. Results from a Monte Carlo study of the Poisson distribution are reported which show that the continuous smoothing technique has desirable small-sample properties. Some comparisons with discrete smoothing techniques are also made.
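The paper's continuous smoother is not spelled out in the abstract; as a baseline for the Poisson case, Robbins' empirical Bayes estimate E[λ | x] = (x + 1) p(x + 1)/p(x) can be computed from a lightly smoothed marginal. The add-one smoothing below is a crude stand-in for the smoothing technique, used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = rng.gamma(2.0, 2.0, size=2000)        # unknown prior on lambda
x = rng.poisson(lam)                        # observed Poisson counts

# Smooth the empirical marginal p(x) slightly before applying Robbins' rule.
kmax = max(x.max() + 2, 12)
counts = np.bincount(x, minlength=kmax).astype(float) + 1.0
p = counts / counts.sum()

def robbins(k):
    """Empirical Bayes point estimate of lambda given an observed count k."""
    return (k + 1) * p[k + 1] / p[k]

for k in (0, 2, 5, 10):
    # True posterior mean under the Gamma(2, scale 2) prior is (k + 2) / 1.5.
    print(k, robbins(k))
```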

Patent
31 Jul 1972
TL;DR: In this article, a character recognition system is disclosed in which each character in a retina, defining a scanning raster, is scanned with random lines uniformly distributed over the retina; for each type of character to be recognized, the system stores a probability density function (PDF) of the random-line intersection lengths and/or a PDF of the number of intersections per random line.
Abstract: A character recognition system is disclosed in which each character in a retina, defining a scanning raster, is scanned with random lines uniformly distributed over the retina. For each type of character to be recognized the system stores a probability density function (PDF) of the random-line intersection lengths and/or a PDF of the number of intersections per random line. As an unknown character is scanned, the intersection lengths and/or numbers of intersections are accumulated and, based on a comparison with the prestored PDFs, a classification of the unknown character is performed.


Book ChapterDOI
01 Jan 1972
TL;DR: A hierarchy of probability density function estimation procedures is considered, beginning with the histogram approach and proceeding to the orthogonal function expansion approach, of which the histogram approach is a special case.
Abstract: The purpose of this paper is to consider a hierarchy of probability density function estimation procedures. The discussion will culminate in a speculation about a universal procedure. First the histogram approach will be investigated. We shall then consider the orthogonal function expansion approach, of which the histogram approach is a special case. Next in complexity is the slightly more complicated window function approach suggested by Parzen and Rosenblatt. One degree of freedom more is found in the approach of Loftsgaarden and Quesenberry in which the window size itself is allowed to be a function of the data. Finally we ask ourselves what is meant by large, small and medium size samples. How should the number of degrees of freedom of the fitting densities grow with the size of the sample? What is the notion of intrinsic complexity of the underlying true distribution?
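Three levels of the hierarchy are easy to compare at a single evaluation point: a histogram cell, a Parzen window with a Gaussian kernel, and the Loftsgaarden-Quesenberry estimate, whose window size is set by the k-th nearest neighbor. Bin width, bandwidth, and k below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(0.0, 1.0, size=1000)
x0 = 0.5                            # evaluation point; true N(0,1) pdf ~ 0.352

# Level 1: histogram estimate, p ~ (bin count) / (n * bin width).
width = 0.4
in_bin = np.sum(np.abs(data - x0) < width / 2)
hist_est = in_bin / (len(data) * width)

# Level 2: Parzen window with a Gaussian kernel of bandwidth h.
h = 0.3
parzen_est = np.mean(np.exp(-0.5 * ((data - x0) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

# Level 3: window size from the k-th nearest neighbor (1-D interval has
# length 2 r_k); some versions use k rather than k - 1 in the numerator.
k = 30
r_k = np.sort(np.abs(data - x0))[k - 1]
knn_est = (k - 1) / (len(data) * 2 * r_k)

print(hist_est, parzen_est, knn_est)
```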

Journal ArticleDOI
TL;DR: In this paper, a comparison was made to determine whether or not improved classification results could be obtained by use of a different form to represent the probability density functions, namely, an empirical multivariate probability density histogram of decorrelated variables.
Abstract: Multivariate normal probability density functions are usually used in classification decision (i.e., recognition) processes on multispectral scanner data. The mean and covariance matrix of the normal density function are usually estimated from a training data set. In this study, a comparison was made to determine whether or not improved classification results could be obtained by use of a different form to represent the probability density functions, namely, an empirical multivariate probability density histogram of decorrelated variables. First, tests were made of the normality of the individual subsets of data, and all were found to be non-normal at the 1% level of significance using a standard chi-square goodness-of-fit test. Operating characteristic curves then were generated to represent decisions made with each form between each given class and a uniformly distributed alternative class; a uniform distribution was chosen because the results are then least dependent on the choice of the alternative data set. It was found that the probabilities of misclassification using the two forms were approximately identical for almost every data set even though a large number of individual data points were classified differently. It was concluded that a decision rule based on the assumption of multivariate normal distributions of scanner signals performs sufficiently well, in comparison with a more accurate but more complicated rule, to warrant its continued use in recognition processing.
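A minimal version of the multivariate-normal decision rule that the study retains: estimate each class's mean and covariance from training data and assign a point to the class with the larger Gaussian log-density. The empirical-histogram alternative compared in the paper is not reproduced here, and the training distributions below are synthetic:

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log density of a multivariate normal, via a linear solve."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

rng = np.random.default_rng(5)
train_a = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=500)
train_b = rng.multivariate_normal([2, 1], [[1.5, -0.2], [-0.2, 0.8]], size=500)

# Estimate (mean, covariance) per class, as from multispectral training data.
params = [(c.mean(axis=0), np.cov(c.T)) for c in (train_a, train_b)]

def classify(x):
    scores = [gaussian_logpdf(x, m, S) for m, S in params]
    return int(np.argmax(scores))   # 0 -> class a, 1 -> class b

print(classify(np.array([0.1, -0.2])), classify(np.array([2.2, 0.9])))
```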

Journal ArticleDOI
TL;DR: In this article, the authors analyse the sensitivity of axially compressed, long cylindrical shells with axisymmetric imperfections from a statistical point of view, and the failure probability of a loaded shell is investigated for various imperfection statistics.
Abstract: The imperfection-sensitivity of axially compressed, long cylindrical shells with axisymmetric imperfections is analysed from a statistical point of view. Koiter's deterministic results, relating buckling load to imperfection amplitude, are used as a nonlinear transfer function between the imperfection distribution and the critical load distribution. The mean critical load and its standard deviation are obtained as functions of the mean, standard deviation and skewness of the imperfection distribution. The results shed some light on the apparently erratic buckling loads obtained in experiments on axially loaded cylindrical shells. The failure probability of a loaded shell is investigated for various imperfection statistics. The buckling of initially imperfect cylindrical shells has been the center of much research in recent years. Large discrepancies between theoretical buckling loads calculated from linearized theories and experimental buckling loads provided the initial stimulus for this field of research. Extensive experimental studies on cylindrical shells subjected to axial compression, for example, have shown that the actual buckling loads usually fall short of their theoretical value and are invariably accompanied by a wide scatter. A collection of this experimental evidence in graphical form may be found in a paper by Hart-Smith [1]. Later development of nonlinear theories for cylindrical shell buckling, which accounted for the presence of initial deviations from the perfect cylindrical shape, brought to light the acute dependence of the critical load upon the magnitude of the imperfections. The earliest effort in this direction was by Koiter [2], who treated an axially loaded cylindrical shell with small axisymmetric imperfections. In a subsequent work, Koiter [3] solved the same problem under less restrictive assumptions regarding cylinder length and imperfection magnitudes. This work confirmed his earlier theory as being reasonably good up to values of imperfection magnitude approaching the shell thickness. Some experimental evidence [4] is now available which partially verifies Koiter's shell buckling theory. The difficulty that arises in this type of imperfection theory is that of the proper choice of the kind of imperfection. The application of the theory depends upon a prior knowledge of the shape as well as the magnitude of the initial shell deviations, and both of these quantities are extremely variable from specimen to specimen. This can be readily seen if one considers these quantities to result from some arbitrary manufacturing process which, due to the very nature of the process, is subject to a great number of random variables. In addition to this, during its useful lifetime, the state of perfection (or imperfection) of the structure is dependent on a variety of external influences which are more or less random in nature. In view of these arguments, and if one considers the scatter in the experimental results mentioned earlier, it becomes clear that the practical usefulness of the nonlinear theories by Koiter and others lies in their combination with a statistical analysis of imperfections and critical loads. Statistically oriented approaches to imperfection sensitivity in structural systems have been proposed by a number of authors, including Bolotin [5], Thompson [6], and Roorda [7]. Amazigo [8] and Fersht [9] have dealt specifically with cylindrical shells. These two authors considered the initial shell deviations to be random in shape as well as magnitude and used quantities like power spectral density or rms deviation as measures of the imperfection intensity. In the present paper, imperfection sensitivity of axially loaded cylindrical shells of infinite length is investigated statistically using the approach of Roorda [7]. In contrast to the work of Amazigo and Fersht, it is assumed here that the shape of the initial deviations from the ideal shell is known. The shape is assumed to be axisymmetric in character so that Koiter's theory [2] relating critical load to imperfection magnitude can be used as a transfer function in the statistical analysis. The magnitude of the initial deviations is assumed to be a random quantity with a given probability density function. The probability densities specified herein are in general nonnormal with a finite skewness. The sensitivity of the mean and standard deviation of the resulting critical load (output) distribution to changes in the mean, standard deviation and skewness of the imperfection (input) distribution can thus be investigated. The added constraint that, if the skewness of the input distribution function approaches zero, this function approaches the Gaussian form is introduced to allow an assessment of the assumption of a normal input distribution. In the latter portion of this work the failure probability of cylindrical shells is investigated and some reliability curves are developed as an aid to designers.
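A sketch of the statistical transfer-function idea: a Koiter-type knockdown relation (1 − λ)² = c |ξ| λ, linking the buckling-load ratio λ to the imperfection amplitude ξ (in shell thicknesses), is applied to a skewed random imperfection amplitude to obtain critical-load statistics and a failure probability. The constant c and the input distribution are illustrative assumptions, not values from the paper.

```python
import numpy as np

def critical_load_ratio(xi, c=2.5):
    """Smaller root of (1 - lam)^2 = c * |xi| * lam, i.e. the knockdown factor.

    Rearranged: lam^2 - (2 + c|xi|) lam + 1 = 0; the root in (0, 1] applies.
    """
    b = 2.0 + c * np.abs(xi)
    return (b - np.sqrt(b**2 - 4.0)) / 2.0

rng = np.random.default_rng(6)
# Skewed imperfection amplitudes (gamma-distributed; statistics illustrative).
xi = rng.gamma(shape=2.0, scale=0.05, size=100_000)

lam = critical_load_ratio(xi)
applied = 0.5                                  # applied load / classical load
print(f"mean lam = {lam.mean():.3f}, std = {lam.std():.3f}, "
      f"P(failure at load {applied}) = {(lam < applied).mean():.4f}")
```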

Proceedings ArticleDOI
01 Dec 1972
TL;DR: A pattern recognition system may be viewed as a decision rule which transforms measurements into class assignments, and the Bayes error is the minimum achievable error, where the minimization is with respect to all decision rules.
Abstract: The key measure of performance in a pattern recognition problem is the cost of making a decision. For the special case in which the relative cost of a correct decision is zero and the relative cost of an incorrect decision is unity, this cost is equal to the probability of an incorrect decision or error. A pattern recognition system may be viewed as a decision rule which transforms measurements into class assignments. The Bayes error is the minimum achievable error, where the minimization is with respect to all decision rules. The Bayes error is a function of the prior probabilities and the probability density functions of the respective classes. Unfortunately, in many applications, the probability density functions are unknown and therefore the Bayes error is unknown.
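When the class densities are known, the Bayes error described above is the integral of the pointwise minimum of the prior-weighted class densities. A one-dimensional Gaussian example (in applications these densities are exactly what is unknown):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p1, p2 = 0.5, 0.5                     # prior probabilities
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f1 = normal_pdf(x, 0.0, 1.0)
f2 = normal_pdf(x, 2.0, 1.0)

# Bayes error: probability that the optimal rule decides incorrectly.
bayes_error = np.sum(np.minimum(p1 * f1, p2 * f2)) * dx
print(f"Bayes error = {bayes_error:.4f}")   # equals Phi(-1) ~ 0.1587 here
```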

Journal ArticleDOI
TL;DR: In this article, a Markov process on a multi-dimensional position-velocity space is used to develop an integral equation for the first passage time density function for a damped harmonic oscillator, and a one-sided position boundary is considered as the region of safe operation.

Journal ArticleDOI
TL;DR: In this paper, the probability density for interplanetary scintillation is shown to follow a log-normal distribution better than the frequently used Rice-squared distribution, but observations of strong scintillation contain more high-intensity spikes than would be expected if the probability distribution for intensity were log normal.
Abstract: Observations of the probability density for interplanetary scintillation are given. They are shown to follow a log-normal distribution better than the frequently used Rice-squared distribution. However, observations of strong scintillation are shown to contain more high-intensity 'spikes' than would be expected if the probability distribution for intensity were log normal. An explanation in terms of focusing by large-scale structure is given and shown to be consistent with spacecraft electron-density spectra.