
Showing papers on "Probability density function published in 1982"


Journal Article
TL;DR: The inverse problem may be formulated as a problem of combination of information: the experimental information about data, the a priori information about parameters, and the theoretical information, and it is shown that the general solution of the non-linear inverse problem is unique and consistent.
Abstract: We examine the general non-linear inverse problem with a finite number of parameters. In order to permit the incorporation of any a priori information about parameters and any distribution of data (not only of Gaussian type) we propose to formulate the problem not using single quantities (such as bounds, means, etc.) but using probability density functions for data and parameters. We also want our formulation to allow for the incorporation of theoretical errors, i.e. non-exact theoretical relationships between data and parameters (due to discretization, or incomplete theoretical knowledge); to do that in a natural way we propose to define general theoretical relationships also as probability density functions. We show then that the inverse problem may be formulated as a problem of combination of information: the experimental information about data, the a priori information about parameters, and the theoretical information. With this approach, the general solution of the non-linear inverse problem is unique and consistent (solving the same problem, with the same data, but with a different system of parameters, does not change the solution).

926 citations
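
A minimal numerical sketch of this combination of information, for a one-parameter toy problem: the a priori density over the model parameter is multiplied by a likelihood whose variance combines the experimental and theoretical errors. The forward relation g and every number below are illustrative assumptions, not values from the paper.

```python
import numpy as np

m = np.linspace(-5.0, 5.0, 2001)                 # grid over the single model parameter
prior = np.exp(-0.5 * ((m - 1.0) / 2.0)**2)      # a priori information (here Gaussian,
                                                 # but any density would do)

def g(m):
    """Assumed non-linear forward relationship d = g(m)."""
    return m + 0.1 * m**2

d_obs, sigma_d, sigma_T = 2.3, 0.5, 0.3          # datum, experimental and theory errors
# For Gaussian experimental and theoretical densities the variances simply add.
posterior = prior * np.exp(-0.5 * (d_obs - g(m))**2 / (sigma_d**2 + sigma_T**2))
posterior /= np.trapz(posterior, m)              # normalize to a probability density
print("posterior mean:", np.trapz(m * posterior, m))
```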


Journal ArticleDOI
TL;DR: Methods for computing the complex probability function w(z) = exp(−z²) erfc(−iz), which is related to the Voigt spectrum line profiles, are developed.
Abstract: Methods for computing the complex probability function w(z) = exp(−z²) erfc(−iz), which is related to the Voigt spectrum line profiles, are developed. The basic method is a rational approximation, minimizing the relative error of the imaginary part on the real axis. It is complemented by other methods in order to increase efficiency and to overcome the inevitable failure of any rational approximation near the real axis. The procedures enable one to evaluate both real and imaginary parts of w(z) with high relative accuracy. The methods are simple, as demonstrated by a sample FORTRAN program.

442 citations
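
For readers who want to experiment: the function computed in this paper is what SciPy now exposes as scipy.special.wofz (the Faddeeva function). A short sketch, checking the definition on the real axis and using w(z) for a Voigt profile; it stands in for, and is not, the paper's FORTRAN routine.

```python
import numpy as np
from scipy.special import wofz   # Faddeeva function w(z) = exp(-z^2) erfc(-iz)

# On the real axis the real part of w(x) reduces to exp(-x^2).
x = np.linspace(-3.0, 3.0, 13)
assert np.allclose(wofz(x).real, np.exp(-x**2))

def voigt(x, sigma, gamma):
    """Voigt line profile (Gaussian width sigma, Lorentzian half-width gamma)."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

print(voigt(0.0, 1.0, 0.5))
```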


Journal Article
TL;DR: Methods for computing the complex probability function w(z) = exp(−z²) erfc(−iz), which is related to the Voigt spectrum line profiles, are developed and enable one to evaluate both real and imaginary parts of w(z) with high relative accuracy.
Abstract: Methods for computing the complex probability function w(z), which is related to the Voigt spectrum line profiles, are developed. The basic method is a rational approximation, minimizing the relative error of the imaginary part on the real axis.

420 citations


Journal ArticleDOI
TL;DR: A new modeling methodology to characterize failure processes in digital computers due to hardware transients is presented, and models of common fault-tolerant redundant structures are developed using decreasing hazard function distributions.
Abstract: In this paper a new modeling methodology to characterize failure processes in digital computers due to hardware transients is presented. The basic assumption made is that system sensitivity to hardware transient errors is a function of critical resources usage. The failure rate of a given resource is approximated by a deterministic function of time, depending on the average workload of that resource, plus a Gaussian process. The probability density function of the time to failure obtained under this assumption has a decreasing hazard function, explaining why decreasing hazard function densities such as the Weibull fit experimental data so well. Data on transient errors obtained from several systems are analyzed. Statistical tests confirm the good fit between decreasing hazard distributions and actual data. Finally, models of common fault-tolerant redundant structures are developed using decreasing hazard function distributions. The analysis indicates significant differences between reliability predictions based on the exponential distribution and those based on decreasing hazard function distributions. Reliability differences of 0.2 and factors greater than 2 in Mission Time Improvement are seen in model results. System designers should be aware of these differences.

157 citations
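
A toy calculation of the paper's headline point, under invented parameters: matching an exponential model to the MTTF of a decreasing-hazard Weibull gives a very different mission time at the same reliability target.

```python
# A decreasing-hazard Weibull (shape < 1) versus an exponential law with
# the same MTTF; all parameter values are invented for illustration.
import math

shape, scale = 0.6, 1000.0                      # Weibull shape < 1: decreasing hazard
mttf = scale * math.gamma(1.0 + 1.0 / shape)    # exponential matched to this MTTF
lam = 1.0 / mttf

R_target = 0.95                                 # required mission reliability
t_weibull = scale * (-math.log(R_target)) ** (1.0 / shape)  # invert exp(-(t/scale)^shape)
t_exponential = -math.log(R_target) / lam                   # invert exp(-lam * t)
print("mission time, Weibull vs exponential:", t_weibull, t_exponential)
```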


Journal ArticleDOI
TL;DR: In this article, the effects of scatter and attenuation on single photon emission computed tomography (SPECT) images can be analyzed with the aid of sophisticated Monte Carlo simulation, which enables control of components which govern the emission and transport of radiation through the source and attenuating medium.
Abstract: The effects of scatter and attenuation on single photon emission computed tomography (SPECT) images can be analyzed with the aid of sophisticated Monte Carlo simulation. Correction procedures can be evaluated by comparing corrected images with images free of scatter and attenuation. The simulation enables control of the components which govern the emission and transport of radiation through the source and attenuating medium. The basic calculation involves sampling the probability density functions (pdf) which govern the photon transport process. First, the origin of a photon is selected by sampling. Variance reduction is applied so that a detection is "forced" and weighted by the probability of an initial direction within the acceptance angle of the collimator multiplied by the probability that the photon is not attenuated. Second, the photon history is continued by sampling for a direction. The photon is forced to interact within the attenuating medium and an appropriate weight is calculated. Variance reduction is again applied with a weight determined by the product of the probability of interaction within the attenuating medium, the probability of scatter, the probability of scattering into the acceptance angle of the collimator, and the probability that the photon reaches the detector. Finally, a new direction and energy are selected. If the new energy is below the baseline energy, the history is terminated; otherwise, the second step is repeated. Presently, the collimator's geometric efficiency is considered without septal penetration.

134 citations
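
A stripped-down sketch of the forced-detection weighting described in the abstract. Instead of discarding photons that miss the collimator or are absorbed, every history is scored with a weight equal to the product of the probabilities of the forced outcomes. The geometry and coefficients are toy assumptions, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forced_detection_weight(depth_cm, mu=0.15, accept_frac=1e-3):
    """Weight for a photon forced toward the detector.

    mu          -- linear attenuation coefficient (1/cm), assumed value
    accept_frac -- solid-angle fraction within the collimator acceptance, assumed
    """
    p_direction = accept_frac              # probability of an initial direction
                                           # within the collimator acceptance angle
    p_survival = np.exp(-mu * depth_cm)    # probability of no attenuation en route
    return p_direction * p_survival

# First step of the algorithm: sample photon origins from the source pdf,
# then score each photon with its forced-detection weight instead of rejecting it.
depths = rng.uniform(0.0, 20.0, size=100_000)   # toy source-depth distribution
weights = forced_detection_weight(depths)
print("mean detected weight:", weights.mean())
```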


Journal ArticleDOI
TL;DR: In this article, a single-degree-of-freedom system with a special type of nonlinear damping and both external and parametric white-noise excitations is considered.
Abstract: A single-degree-of-freedom system with a special type of non-linear damping and both external and parametric white-noise excitations is considered. For the special case when the intensities of coordinate and velocity modulation satisfy a certain condition, an exact analytical solution is obtained to the corresponding stationary Fokker-Planck-Kolmogorov equation, yielding an expression for the joint probability density of coordinate and velocity. This solution is analyzed particularly in connection with the stochastic stability problem for the corresponding linear system; certain implications are illustrated for the system, which is stable in probability but unstable in the mean square. The solution obtained may be used to check different approximate methods for the analysis of systems with randomly varying parameters.

126 citations
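
The closing remark, that exact solutions serve to check approximate methods, is easy to act on by simulation. A minimal Euler-Maruyama sketch for the externally excited linear case, with invented coefficients; its stationary density is Gaussian with a known variance, which supplies the check.

```python
import numpy as np

rng = np.random.default_rng(1)
omega, zeta, D = 1.0, 0.1, 0.2      # natural frequency, damping ratio, noise intensity
dt, n_steps = 1e-3, 2_000_000
x, v = 0.0, 0.0
xs = []
for i in range(n_steps):
    # explicit Euler-Maruyama step for x'' + 2*zeta*omega*x' + omega^2*x = noise
    x, v = (x + v * dt,
            v + (-2*zeta*omega*v - omega**2*x) * dt
              + np.sqrt(2*D) * rng.normal(0.0, np.sqrt(dt)))
    if i % 50 == 0:
        xs.append(x)

# Stationary coordinate variance for this linear case is D / (2*zeta*omega^3).
print(np.var(xs[2000:]), D / (2 * zeta * omega**3))
```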


Journal ArticleDOI
TL;DR: In this article, a three-parameter normal tail approximation to a non-normal distribution function is proposed, in which the distribution function, the probability density function, and its derivative are matched at the approximation point by the approximating function.

115 citations
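
A sketch of how the three matching conditions can be solved in practice, with the approximating tail taken as A·Φ((x−μ)/σ). The reduction to a single root-find is my own rearrangement, and the exponential target at x0 = 2 is an invented example.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

x0 = 2.0
# Target distribution to be approximated (exponential, as an example):
F0, f0, df0 = 1 - np.exp(-x0), np.exp(-x0), -np.exp(-x0)   # F, f, f' at x0

# Matching A*Phi(u) = F0, (A/sigma)*phi(u) = f0, -(A/sigma^2)*u*phi(u) = df0,
# with u = (x0 - mu)/sigma, gives u = -sigma*df0/f0 and
# phi(u)/(sigma*Phi(u)) = f0/F0, a one-dimensional equation for sigma.
u_of = lambda s: -s * df0 / f0
h = lambda s: norm.pdf(u_of(s)) - (f0 / F0) * s * norm.cdf(u_of(s))

sigma = brentq(h, 1e-6, 50.0)
u = u_of(sigma)
mu, A = x0 - sigma * u, F0 / norm.cdf(u)
print(f"A = {A:.4f}, mu = {mu:.4f}, sigma = {sigma:.4f}")
```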


Journal ArticleDOI
TL;DR: The efficiency of the feature vector is demonstrated through experimental results obtained with some natural texture data and a simpler quadratic mean classifier.

114 citations


Journal ArticleDOI
TL;DR: In this paper, the statistics of scintillation intensity on an X-band satellite downlink obtained using the Orbital Test Satellite beacon transmissions were analyzed. The experimentally found distribution is shown to depart significantly from the expected log-normal distribution; this is explained in terms of a Gaussian process with a time-variable standard deviation, from which a universal model is derived.
Abstract: Extensive experimental results are presented on the statistics of tropospheric amplitude scintillations on an X-band satellite downlink obtained using the Orbital Test Satellite beacon transmissions. The experimentally found distribution is shown to depart significantly from the expected log-normal distribution, and this is explained in terms of a Gaussian process with a time-variable standard deviation, from which a universal model is derived. It has been found that on average no less than about 100 h of data are required before the probability density and cumulative probability distribution functions approach stationarity. The statistics of the scintillation intensity are also presented, and a log-normal distribution of intensity is shown to be in good agreement with observations from other experimental sites. Link budget implications are outlined together with a simple strategy for the investigation of the scintillation process at any ground station.

105 citations
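
The mechanism invoked above, a conditionally Gaussian log-amplitude whose standard deviation itself varies slowly, is simple to reproduce. In the sketch below the intensity law is lognormal, as the abstract reports, but every number is an illustrative assumption; the positive excess kurtosis marks the departure from a single Gaussian.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
sigma_t = rng.lognormal(mean=-1.0, sigma=0.5, size=n)  # slowly varying intensity
chi = sigma_t * rng.normal(size=n)                     # conditionally Gaussian samples

z = (chi - chi.mean()) / chi.std()
print("excess kurtosis:", (z**4).mean() - 3.0)         # 0 for a true Gaussian
```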


Journal ArticleDOI
01 Mar 1982
TL;DR: In this paper, the authors examined the properties and applications of a point process that arises when each event of a primary Poisson process generates a random number of subsidiary events, with a given time course.
Abstract: Multiplication effects in point processes are important in a number of areas of electrical engineering and physics. We examine the properties and applications of a point process that arises when each event of a primary Poisson process generates a random number of subsidiary events, with a given time course. The multiplication factor is assumed to obey the Poisson probability law, and the dynamics of the time delay are associated with a linear filter of arbitrary impulse response function; special attention is devoted to the rectangular and exponential cases. The process turns out to be a doubly stochastic Poisson point process whose stochastic rate is shot noise; it has application in pulse, particle, and photon detection. Explicit results are obtained for the single and multifold counting statistics (distribution of the number of events registered in a fixed counting time), the time statistics (forward recurrence time and interevent probability densities), and the power spectrum (noise properties). These statistics can provide substantial insight into the underlying physical mechanisms generating the process. An example of the applicability of the model is provided by cathodoluminescence (the phenomenon responsible for the television image) where a beam of electrons (the primary process) impinges on a phosphor, generating a shower of visible photons (the secondary process). Each electron produces a random number of photons whose emission times are determined by the (possibly random) lifetime of the phosphor, so that multiplication effects and time delay both come into play. We use our formulation to obtain the forward-recurrence-time probability density for cathodoluminescence in YVO4:Eu3+, the excess cathodoluminescence noise in ZnS:Ag, and the counting distribution for radioluminescence photons produced in a glass photomultiplier tube. Agreement with experimental data is very good in all cases. A variety of other applications and extensions of the model are considered.

96 citations
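
The process is straightforward to simulate directly from the description above: primary Poisson events each spawn a Poisson number of secondaries, delayed through an exponential impulse response. Rates, the filter time constant, and the counting window below are invented; the Fano factor above 1 is the doubly stochastic signature the paper exploits.

```python
import numpy as np

rng = np.random.default_rng(3)
rate, mu, tau = 50.0, 5.0, 0.1       # primary rate, mean multiplication, filter time
T, n_trials = 1.0, 20_000            # counting window and number of repetitions

counts = np.empty(n_trials, dtype=int)
for k in range(n_trials):
    # Primaries on [-5*tau, T] so that earlier events can still contribute.
    n_prim = rng.poisson(rate * (T + 5 * tau))
    t_prim = rng.uniform(-5 * tau, T, size=n_prim)
    n_sec = rng.poisson(mu, size=n_prim)                  # multiplication factors
    delays = rng.exponential(tau, size=n_sec.sum())       # exponential filter
    t_sec = np.repeat(t_prim, n_sec) + delays
    counts[k] = np.count_nonzero((t_sec >= 0.0) & (t_sec < T))

print("Fano factor:", counts.var() / counts.mean())       # > 1 for this process
```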


Journal ArticleDOI
TL;DR: In this article, the problem of estimating a probability density from observations from that density which are further contaminated by random errors is considered, and a method of estimation using spline functions is proposed.
Abstract: We consider the problem of estimating a probability density from observations from that density which are further contaminated by random errors. We propose a method of estimation using spline functions, discuss the numerical implementation of the method, and prove its consistency. The problem is motivated by the analysis of DNA content obtained by microfluorometry, and an example of such an analysis is included.
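
The paper's estimator is spline-based; as a rough stand-in for readers, here is the generic Fourier route to the same deconvolution problem: divide the empirical characteristic function of the contaminated data by the known characteristic function of the error and invert with a frequency cutoff. The Gaussian error law, the cutoff, and the simulated data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sig_err = 5000, 0.3
x_true = rng.gamma(4.0, 0.5, size=n)            # unobserved draws from the target
y = x_true + rng.normal(0.0, sig_err, size=n)   # contaminated observations

t = np.linspace(-8.0, 8.0, 801)                      # frequency grid, cutoff |t| <= 8
phi_y = np.exp(1j * np.outer(t, y)).mean(axis=1)     # empirical cf of Y
phi_x = phi_y / np.exp(-0.5 * (sig_err * t)**2)      # divide out the Gaussian error cf

xs = np.linspace(0.0, 6.0, 200)
f_hat = np.trapz(np.exp(-1j * np.outer(xs, t)) * phi_x, t, axis=1).real / (2 * np.pi)
# f_hat is a (slightly oscillatory) estimate of the density of x_true on xs.
```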

Journal ArticleDOI
TL;DR: In this article, a method for selecting the member of a collection of families of distributions that best fit a set of observations is given, which is essentially the value of the density function of a scale transformation maximal invariant.
Abstract: A method is given for selecting the member of a collection of families of distributions that best fits a set of observations. This method requires a noncensored set of observations. The families considered include the exponential, gamma, Weibull, and lognormal. A selection statistic is proposed that is essentially the value of the density function of a scale transformation maximal invariant. Some properties of the selection procedures based on these statistics are stated, and results of a simulation study are reported. A set of time-to-failure data from a textile experiment is used as an example to illustrate the procedure, which is implemented by a computer program.
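
The selection statistic in the paper is the density of a scale-transformation maximal invariant; as a cruder illustration of the same task, the sketch below simply picks the family with the largest maximized log-likelihood. The simulated failure times stand in for the textile data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = 100.0 * rng.weibull(1.8, size=200)     # stand-in, uncensored failure times

families = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
}
scores = {}
for name, dist in families.items():
    params = dist.fit(data, floc=0.0)          # location pinned at zero
    scores[name] = dist.logpdf(data, *params).sum()

print(max(scores, key=scores.get), scores)
```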

Journal ArticleDOI
TL;DR: McClelland's (1979) cascade model is investigated, and it is shown that the model does not have a well-defined reaction time (RT) distribution function because it always predicts a nonzero probability that a response never occurs.
Abstract: McClelland's (1979) cascade model is investigated, and it is shown that the model does not have a well-defined reaction time (RT) distribution function because it always predicts a nonzero probability that a response never occurs. By conditioning on the event that a response does occur, RT density and distribution functions are derived, thus allowing most RT statistics to be computed directly and eliminating the need for computer simulations. Using these results, an investigation of the model revealed that (a) it predicts mean RT additivity in most cases of pure insertion or selective influence; (b) it predicts only a very small increase in standard deviations as mean RT increases; and (c) it does not mimic the distribution of discrete-stage models that have a serial stage with an exponentially distributed duration. Recently, McClelland (1979) proposed a continuous-time linear systems model of simple cognitive processes based on sequential banks of parallel integrators. This model, referred to by McClelland as the cascade model, exhibits some potentially very interesting properties. For example, McClelland argues that under certain conditions it mimics some of the reaction time (RT) additivities characteristic of serial discrete-stage models. Unfortunately, however, rigorous empirical testing of the model is precluded because McClelland (1979) offers no method for computing any of the RT statistics it predicts. The format of this note is as follows: I will show that the model always predicts a nonzero probability that a response never occurs, which means, for example, that it always predicts infinite mean RTs. One way to circumvent this problem is to look only at trials on which a response does occur. By doing this it is possible to derive an RT probability density function predicted by the cascade model. From it, virtually any desired RT statistic can be accurately computed. Some of these (e.g., means and variances) will be examined, with particular regard to how well they correspond to known empirical results. For example, it turns out

Journal ArticleDOI
TL;DR: In this article, an easily computable predictive density is considered, which coincides with the posterior predictive density with a non-informative prior, for standard situations achievements are comparable with classical inference.
Abstract: An easily computable predictive density is considered. Although it involves a plain maximum likelihood technique, it coincides with the posterior predictive density under a noninformative prior. For standard situations, its performance is comparable with that of classical inference.
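
For the Gaussian case the comparison in the abstract can be made concrete: the maximum-likelihood plug-in predictive is a normal density, while the posterior predictive under the noninformative prior is a Student-t with n−1 degrees of freedom and an inflated scale. The formulas below are the standard results, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(10.0, 2.0, size=15)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

plug_in = stats.norm(loc=xbar, scale=x.std(ddof=0))                    # ML plug-in
posterior_pred = stats.t(df=n - 1, loc=xbar, scale=s * np.sqrt(1 + 1/n))

x_new = 13.0
print(plug_in.pdf(x_new), posterior_pred.pdf(x_new))   # plug-in is overconfident
```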

Journal ArticleDOI
TL;DR: In this paper, it is argued that since the root mean square value of these fluctuations is not small compared with the mean, new methods should be developed which take explicit account of the fluctuations.

Journal ArticleDOI
Charles P. Beetz
TL;DR: In this article, the authors use a mixture of Weibull distributions adapted to the case of a bimodal distribution, and find the parameters of the mixture by fitting the mixed probability density to the experimental histogram using maximum-likelihood methods.
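
A sketch of that fitting step, with invented bimodal data and starting values: a two-component Weibull mixture whose five parameters are found by minimizing the negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
data = np.concatenate([2.0 * rng.weibull(4.0, 300),   # invented bimodal sample
                       5.0 * rng.weibull(8.0, 300)])

def weibull_pdf(x, k, lam):
    return (k / lam) * (x / lam)**(k - 1) * np.exp(-(x / lam)**k)

def neg_log_likelihood(p):
    w, k1, l1, k2, l2 = p
    f = w * weibull_pdf(data, k1, l1) + (1 - w) * weibull_pdf(data, k2, l2)
    return -np.log(f).sum()

res = minimize(neg_log_likelihood, x0=[0.5, 3.0, 2.0, 7.0, 5.0],
               bounds=[(0.01, 0.99)] + [(0.1, 50.0)] * 4)
print(res.x)    # weight, shape/scale of component 1, shape/scale of component 2
```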

Journal ArticleDOI
TL;DR: In this paper, a parameterization scheme for partial cloudiness is proposed to be used in the framework of higher-order models of the turbulent planetary boundary layer, based on the assumption that the total moisture and temperature fluctuations follow gamma probability density functions, which allow for a variable skewness factor, and therefore for different cloud layer regimes.
Abstract: This paper aims to develop and test a parameterization scheme for partial cloudiness, to be used in the framework of higher-order models of the turbulent planetary boundary layer. The proposed scheme is designed to be general enough and fairly accurate, although slightly at the expense of simplicity. It is based upon the assumption that the total moisture and temperature fluctuations follow gamma probability density functions, which allow for a variable skewness factor, and therefore for different cloud layer regimes. It is nevertheless believed that simpler parameterizations can be used in a number of cases, depending upon the specific application.
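
Under the stated gamma assumption, the scheme's central quantity reduces to a probability that moisture exceeds saturation. A minimal sketch with invented moments; the actual scheme couples these moments to the turbulence model and allows for skewness.

```python
from scipy import stats

q_mean, q_std, q_sat = 8.0, 1.5, 9.0       # g/kg, invented values
a = (q_mean / q_std)**2                    # gamma shape from the first two moments
scale = q_std**2 / q_mean                  # gamma scale
cloud_fraction = stats.gamma.sf(q_sat, a, scale=scale)   # P(total moisture > q_sat)
print(f"cloud fraction: {cloud_fraction:.3f}")
```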

Journal ArticleDOI
TL;DR: In this article, the transport equation for the probability density function of a scalar in turbulent shear flow is analyzed and the closure based on the gradient flux model for the turbulent flux and an integral model for scalar dissipation term is put forward.
Abstract: The transport equation for the probability density function of a scalar in turbulent shear flow is analyzed and the closure based on the gradient flux model for the turbulent flux and an integral model for the scalar dissipation term is put forward. The probability density function equation is complemented by a two‐equation turbulence model. Application to several shear flows proves the capability of the closure model to determine the probability density function of passive scalars.

Journal ArticleDOI
TL;DR: In this paper, the probability density function (pdf) of Richardson number in a Gaussian internal-wave field is derived; the pdf depends on only the rms strain in the field, which is very weakly dependent on depth, if at all.
Abstract: The probability density function (pdf) of Richardson number in a Gaussian internal-wave field is derived. It is found to compare well with available data. The pdf depends on only one parameter, λ, the rms strain in the field, which is very weakly dependent on depth, if at all. The probability that Ri < 0.25 is a very sensitive function of λ, which is about 0.5 in the ocean. Numerical simulations of vertical profiles Ri(z) are calculated based on a set of stochastic differential equations. The statistics of the vertical distribution of regions where Ri < 0.25 are investigated, and a simplified mixing model based on the stochastic differential equations is derived. We conclude that shear instability is a significant factor in the dissipation of internal waves.

Journal ArticleDOI
TL;DR: In this article, an asymptotic expansion is given for a class of integrals for large values of a parameter, which corresponds with the degrees of freedom in a certain type of cumulative distribution functions.
Abstract: An asymptotic expansion is given for a class of integrals for large values of a parameter, which corresponds with the degrees of freedom in a certain type of cumulative distribution functions. The expansion is uniform with respect to a variable related to the random variable of the distribution functions. Special cases include the chi-square distribution and the F-distribution.

Journal ArticleDOI
TL;DR: A model for cyclic straining is developed based on knowledge of the inhomogeneous dislocation structures observed in cyclically strained metals; a distribution of volumes with different internal critical flow stresses, characterized by a probability density function, is assumed.
Abstract: Starting from a knowledge of inhomogeneous dislocation structures observed in cyclically strained metals, a model for cyclic straining is developed. A distribution of volumes with different internal critical flow stresses is assumed, characterized by a probability density function. A generalization which includes a thermally activated component of the flow stress is derived, assuming that the saturated microscopic effective stress, σes, is equal in all volumes. The relations needed to obtain the probability density function from experimental data are derived. The theory yields the macroscopic internal stress, σi, and the macroscopic effective stress, σe, along the hysteresis loop. Experimental observations on cyclically strained metals can be explained using this statistical theory.

Journal ArticleDOI
TL;DR: In this paper, a lightness scale is derived from a theoretical estimate of the probability distribution of image intensities for natural scenes, which is a scale similar to that used in photography or by the nervous system.
Abstract: A lightness scale is derived from a theoretical estimate of the probability distribution of image intensities for natural scenes. The derived image intensity distribution considers three factors: reflectance; surface orientation and illumination; and surface texture (or roughness). The convolution of the effects of these three factors yields the theoretical probability distribution of image intensities. A useful lightness scale should be the integral of this probability density function, for then equal intervals along the scale are equally probable and carry equal information. The result is a scale similar to that used in photography or by the nervous system as its transfer function.
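
In computational terms the proposal is histogram equalization: the lightness of an intensity is its cumulative probability, so equal lightness steps are equally probable. A minimal sketch with an assumed empirical intensity sample in place of the paper's derived distribution.

```python
import numpy as np

rng = np.random.default_rng(8)
intensities = np.sort(rng.lognormal(0.0, 0.5, size=100_000))  # stand-in sample

def lightness(i):
    """Empirical CDF of image intensity, i.e. the integral of its pdf up to i."""
    return np.searchsorted(intensities, i) / intensities.size

print(lightness(1.0), lightness(2.0))
```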

Journal ArticleDOI
TL;DR: In this article, a universal model for the irradiance fluctuations of an optical beam propagating through atmospheric turbulence was proposed and compared with existing measured data, which suggests that this new theoretical model is applicable under all known conditions of turbulence.
Abstract: A universal model is proposed for the irradiance fluctuations of an optical beam propagating through atmospheric turbulence. When this model was compared with existing measured data, we found good qualitative and quantitative agreement, which suggests that this new theoretical model is applicable under all known conditions of turbulence. In the regime of weak scattering, the normalized moments of the distribution are essentially the same as those predicted by the lognormal model, although they show large deviations from lognormal statistics in the saturation regime. The limiting form of the universal model for conditions of super-strong turbulence is that of the negative-exponential distribution, but, for more moderate conditions of turbulence, the form is that of an exponential times an infinite series of Laguerre polynomials. The new distribution was derived under the assumption that the field irradiance consists of two principal components, each of which has an amplitude that is m-distributed.

Journal ArticleDOI
TL;DR: It is proved that pattern recognition procedures derived from orthogonal series estimates of a probability density function are Bayes risk consistent and do not lose their asymptotic properties even if the random environment is nonstationary.
Abstract: Van Ryzin and Greblicki showed that pattern recognition procedures derived from orthogonal series estimates of a probability density function are Bayes risk consistent. In this note it is proved that these procedures retain their asymptotic properties, under some additional conditions, even if the random environment is nonstationary.
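
A minimal orthogonal-series estimate of the kind the note builds on: a cosine basis on [0, 1], with each coefficient estimated by the sample mean of the basis function. Basis, truncation point, and data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.beta(2.0, 5.0, size=2000)        # sample supported on [0, 1]
J = 10                                   # series truncation

def f_hat(t):
    est = np.ones_like(t)                # phi_0 = 1 always has coefficient 1
    for j in range(1, J + 1):
        a_j = np.mean(np.sqrt(2.0) * np.cos(np.pi * j * x))   # empirical coefficient
        est = est + a_j * np.sqrt(2.0) * np.cos(np.pi * j * t)
    return est

print(f_hat(np.array([0.1, 0.3, 0.8])))
```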

Journal ArticleDOI
TL;DR: In this article, a class of coupled nonlinear dynamical systems subjected to stochastic excitation is considered and the exact steady-state probability density function for this class of systems can be constructed.
Abstract: In this paper a class of coupled nonlinear dynamical systems subjected to stochastic excitation is considered. It is shown how the exact steady-state probability density function for this class of systems can be constructed. The result is then applied to some classical oscillator problems.

Journal ArticleDOI
TL;DR: In this article, it was shown that the probability density function of the intensity for a monochromatic, fully developed speckle pattern changes from an exponential distribution at low turbulence levels to a K distribution as the turbulence level increases.
Abstract: It is shown that, because of the effects of the turbulent atmosphere, the probability density function of the intensity for a monochromatic, fully developed speckle pattern changes from an exponential distribution at low turbulence levels to a K distribution as the turbulence level increases. A physical model that leads to the K distribution is proposed, and the parameters of the K distribution are derived in terms of the strength of turbulence, path length, wavelength, and beam size. The work is then extended to polychromatic and partially developed speckle patterns and to speckle with a coherent background. Good agreement is obtained between the theoretical predictions and experimental measurements.
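
The standard compound picture behind the K distribution is easy to verify numerically: negative-exponential speckle whose mean intensity is itself gamma distributed. The gamma order alpha below is an invented value; as alpha grows, the plain exponential of the low-turbulence limit is recovered.

```python
import numpy as np

rng = np.random.default_rng(10)
alpha, n = 2.0, 1_000_000
mean_i = rng.gamma(alpha, 1.0 / alpha, size=n)   # turbulence-modulated mean (<m> = 1)
intensity = rng.exponential(mean_i)              # K-distributed intensity

# Normalized second moment: 2 for exponential speckle, 2*(1 + 1/alpha) for K.
print(np.mean(intensity**2) / np.mean(intensity)**2, 2 * (1 + 1 / alpha))
```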

Journal ArticleDOI
TL;DR: Two assumptions commonly made by choice reaction time (RT) models are that certain experimental tasks can be found that cause an extra processing stage to be inserted into the cognitive process and that the duration of one or more processing stages is random with an exponential distribution.
Abstract: Two assumptions commonly made by choice reaction time (RT) models are (1) that certain experimental tasks can be found that cause an extra processing stage to be inserted into the cognitive process and (2) that the duration of one or more processing stages is random with an exponential distribution. Few rigorous tests of either assumption exist. This paper reviews existing tests and presents several new results that can be used to test these assumptions. First, in the case in which the duration of an inserted stage is exponentially distributed, it is shown that the observable RT density functions must always intersect at the mode of the density requiring the extra processing stage. Second, when only the first assumption (Assumption 1) is made, it is shown that the cumulative RT distribution functions and, in many cases, the hazard functions must be ordered. Finally, when only Assumption 2 is made, it is shown that, under fairly weak conditions, the tail of the RT density function must be exponential. The first two results are applied to data from a memory scanning experiment, and the two assumptions are found to receive tentative support.

Journal ArticleDOI
TL;DR: In this article, a one-dimensional steady state probabilistic river water quality model is developed to compute the joint and marginal probability density functions of BOD and DO at any point in a river.
Abstract: A one-dimensional steady state probabilistic river water quality model is developed to compute the joint and marginal probability density functions of BOD and DO at any point in a river. The model can simultaneously consider randomness in the initial conditions, inputs, and coefficients of the water quality equations. Any empirical or known distribution can be used for the initial conditions. Randomness in each of the water quality equation inputs and coefficients is modeled as a Gaussian white noise process. The joint probability density function (pdf) of BOD and DO is determined by numerically solving the Fokker-Planck random differential equation. In addition, moment equations are developed which allow the mean and variance of BOD and DO to be calculated independently of their joint pdf. The probabilistic river water quality model is examined through a sensitivity study and an application to a hypothetical river system.
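
A sketch of the moment-equation idea, cut down to a single linear BOD equation with white-noise forcing: the mean and variance obey small deterministic ODEs that can be integrated without solving the Fokker-Planck equation. The decay rate and noise intensity are invented, and the coupled BOD-DO system is reduced to one state for brevity.

```python
k1, sigma = 0.3, 0.5          # decay rate (1/day) and noise intensity, invented
dt, n = 0.01, 2000            # time step (days) and number of steps
mean, var = 10.0, 0.0         # initial BOD mean and variance

# dB = -k1*B dt + sigma dW  =>  d<B>/dt = -k1 <B>,  dVar/dt = -2 k1 Var + sigma^2
for _ in range(n):
    mean += -k1 * mean * dt
    var += (-2.0 * k1 * var + sigma**2) * dt

print(f"t = {n*dt:.0f} d: mean = {mean:.3f}, var = {var:.3f}")
```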


Journal ArticleDOI
TL;DR: In this article, the Fokker-Planck equation is solved analytically for a one-parameter family of symmetric, attractive, nonharmonic potentials which include double-well situations.
Abstract: We solve analytically the Fokker-Planck equation for a one-parameter family of symmetric, attractive, nonharmonic potentials which include double-well situations. The exact knowledge of the eigenfunctions and eigenvalues allows us to fully discuss the transient behavior of the probability density. In particular, for the bistable potentials, we can give analytical expressions for the probability current over the working barrier and for the onset time which characterizes the transition from uni- to bimodal probability densities.