
Showing papers on "Probability density function published in 1992"


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method for rejection sampling from any univariate log-concave probability density function, which is adaptive: as sampling proceeds, the rejection envelope and the squeezing function converge to the density function.
Abstract: We propose a method for rejection sampling from any univariate log‐concave probability density function. The method is adaptive: As sampling proceeds, the rejection envelope and the squeezing function converge to the density function. The rejection envelope and squeezing function are piece‐wise exponential functions, the rejection envelope touching the density at previously sampled points, and the squeezing function forming arcs between those points of contact. The technique is intended for situations where evaluation of the density is computationally expensive, in particular for applications of Gibbs sampling to Bayesian models with non‐conjugacy. We apply the technique to a Gibbs sampling analysis of monoclonal antibody reactivity.
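
For illustration, a minimal Python sketch of the adaptive envelope idea (assuming the derivative of the log-density is available; the squeezing test and most of the paper's refinements are omitted, so this is a simplification, not the authors' full algorithm):

```python
import numpy as np

def ars_sample(h, dh, x0, n_samples, rng=np.random.default_rng(0)):
    """Minimal adaptive rejection sampler for a log-concave density.

    h, dh : log-density (up to a constant) and its derivative
    x0    : initial abscissae; leftmost needs dh > 0, rightmost dh < 0
    The squeezing (lower-hull) test of the paper is omitted for brevity,
    and tangent slopes of exactly zero are not handled.
    """
    xs = np.sort(np.asarray(x0, dtype=float))
    out = []
    while len(out) < n_samples:
        hx, sx = h(xs), dh(xs)
        # Intersections of adjacent tangents give the piecewise-exponential hull.
        z = (hx[1:] - hx[:-1] - xs[1:] * sx[1:] + xs[:-1] * sx[:-1]) / (sx[:-1] - sx[1:])
        zl = np.concatenate(([-np.inf], z))    # left edge of each tangent piece
        zr = np.concatenate((z, [np.inf]))     # right edge of each tangent piece
        tl = hx + sx * (zl - xs)               # hull value at left edges
        tr = hx + sx * (zr - xs)               # hull value at right edges
        mass = (np.exp(tr) - np.exp(tl)) / sx  # integral of exp(hull) per piece
        # Pick a piece with probability proportional to its mass, then invert
        # the exponential CDF within that piece.
        i = rng.choice(len(mass), p=mass / mass.sum())
        u = rng.random()
        x = np.log((1 - u) * np.exp(sx[i] * zl[i]) + u * np.exp(sx[i] * zr[i])) / sx[i]
        ux = hx[i] + sx[i] * (x - xs[i])       # envelope value at the proposal
        if np.log(rng.random()) <= h(x) - ux:  # rejection test against the envelope
            out.append(x)
        else:
            xs = np.sort(np.append(xs, x))     # adapt: the hull tightens here
    return np.array(out)

# Example: standard normal, h(x) = -x**2 / 2 is log-concave.
draws = ars_sample(lambda x: -0.5 * x**2, lambda x: -x, [-1.0, 1.0], 5000)
print(draws.mean(), draws.std())  # should land near 0 and 1
```

Each rejected proposal is added to the abscissae, so the envelope converges to the density as sampling proceeds, which is what keeps the number of expensive density evaluations low.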

1,538 citations


Journal ArticleDOI
TL;DR: These formulas incorporate random testing results, information about the input distribution, and Bayesian prior assumptions about the probability of failure of the software.
Abstract: Formulas for estimating the probability of failure when testing reveals no errors are introduced. These formulas incorporate random testing results, information about the input distribution, and prior assumptions about the probability of failure of the software. The formulas are not restricted to equally likely input distributions, and the probability of failure estimate can be adjusted when assumptions about the input distribution change. The formulas are based on a discrete sample space statistical model of software and include Bayesian prior assumptions. Reusable software and software in life-critical applications are particularly appropriate candidates for this type of analysis.
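
The Bayesian setup described here admits a standard conjugate sketch (the textbook computation, not necessarily the paper's exact formulas): with a Beta(a, b) prior on the failure probability θ and n random tests revealing no failures,

```latex
p(\theta \mid \text{0 failures in } n \text{ tests})
  \;\propto\; \theta^{a-1}(1-\theta)^{b-1}\,(1-\theta)^{n}
  \;=\; \mathrm{Beta}(a,\ b+n),
\qquad
E[\theta \mid \text{data}] = \frac{a}{a+b+n},
```

so a uniform prior (a = b = 1) gives the estimate 1/(n + 2) after n failure-free tests.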

294 citations


Journal ArticleDOI
Joseph Abate, Ward Whitt
TL;DR: This paper presents a version of the Fourier-series method for numerically inverting probability generating functions: a simple algorithm with a convenient error bound derived from the discrete Poisson summation formula.
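
The flavor of such an inversion can be sketched in a few lines (our sketch of the standard trapezoid-rule contour inversion, where the radius r keeps the aliasing error below roughly 10^-gamma; the exact radius choice follows common practice and is not necessarily this paper's):

```python
import numpy as np

def invert_pgf(G, k, gamma=8):
    """Trapezoid-rule (Fourier-series) inversion of a probability generating
    function G(z) = sum_k p_k z^k, recovering p_k for k >= 1.  The radius r
    keeps the aliasing error below roughly 10**-gamma."""
    r = 10.0 ** (-gamma / (2.0 * k))
    j = np.arange(2 * k)
    z = r * np.exp(1j * np.pi * j / k)          # 2k points on a circle of radius r
    return np.sum((-1.0) ** j * G(z)).real / (2 * k * r ** k)

# Check against a geometric distribution, G(z) = (1-q) / (1 - q z):
q = 0.4
G = lambda z: (1 - q) / (1 - q * z)
for k in (1, 5, 20):
    print(k, invert_pgf(G, k), (1 - q) * q ** k)
```

Because the coefficients are probabilities bounded by 1, the aliasing error of the 2k-point rule is at most r^{2k}/(1 - r^{2k}), which is the kind of convenient bound the TL;DR refers to.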

263 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of dead space on the statistics of the gain in a double-carrier-multiplication avalanche photodiode (APD) was determined using a recurrence method.
Abstract: The effect of dead space on the statistics of the gain in a double-carrier-multiplication avalanche photodiode (APD) is determined using a recurrence method. The dead space is the minimum distance that a newly generated carrier must travel in order to acquire sufficient energy to become capable of causing an impact ionization. Recurrence equations are derived for the first moment, the second moment, and the probability distribution function of two random variables that are related, in a deterministic way, to the random gain of the APD. These equations are solved numerically to produce the mean gain and the excess noise factor. The presence of dead space reduces both the mean gain and the excess noise factor of the device. This may have a beneficial effect on the performance of the detector when used in optical receivers with photon noise and circuit noise.

251 citations


Journal ArticleDOI
TL;DR: In this paper, a general framework to compute the statistical moments of q and of the associated total solute discharge Q and mass M is established; the expected solute flux is proportional to the joint probability density function (pdf) g1 of η, ζ and τ, whereas the variance of q is shown to depend on the joint pdf g2 of the same variables for two particles.
Abstract: It is common to represent solute transport in heterogeneous formations in terms of the resident concentration C(x, t), regarded as a random space function. The present study investigates the alternative representation by q, the solute mass flux at a point of a control plane normal to the mean flow. This representation is appropriate for many field applications in which the variable of interest is the mass of solute discharged through a control surface. A general framework to compute the statistical moments of q and of the associated total solute discharge Q and mass M is established. With x the direction of the mean flow, a solute particle crosses the control plane at y = η, z = ζ and at the travel (arrival) time τ. The associated expected solute flux is proportional to the joint probability density function (pdf) g1 of η, ζ and τ, whereas the variance of q is shown to depend on the joint pdf g2 of the same variables for two particles. In turn, the statistical moments of η, ζ and τ depend on those of the velocity components through a system of stochastic ordinary differential equations. For a steady velocity field and neglecting the effect of pore-scale dispersion, a major simplification of the problem results in the independence of the random variables η, ζ and τ. As a consequence, the pdf of η and ζ can be derived independently of τ. A few approximate approaches to derive the statistical moments of η, ζ and τ are outlined. These methods will be explored in paper 2 in order to effectively derive the variances of the total solute discharge and mass, while paper 3 will deal with the nonlinear effect of the velocity variance upon the moments of η, ζ and τ.

219 citations


Journal ArticleDOI
TL;DR: In this article, a differential equation for diffusion in isotropic and homogeneous fractal structures is derived within the context of fractional calculus, which generalizes the fractional diffusion equation valid in Euclidean systems.
Abstract: A differential equation for diffusion in isotropic and homogeneous fractal structures is derived within the context of fractional calculus. It generalizes the fractional diffusion equation valid in Euclidean systems. The asymptotic behavior of the probability density function is obtained exactly and coincides with the accepted asymptotic form obtained using scaling arguments and exact enumeration calculations on large percolation clusters at criticality. The asymptotic frequency dependence of the scattering function, which can be studied by X-ray and neutron scattering experiments on fractals, is derived exactly from the present approach.
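
The abstract states the asymptotic PDF only in words; for diffusion on a fractal with fractal dimension d_f, walk dimension d_w and spectral dimension d_s = 2d_f/d_w, the accepted asymptotic form it refers to is usually written as the stretched Gaussian (our transcription of the standard result, not the paper's own equation):

```latex
P(r,t) \;\sim\; t^{-d_s/2}\,
  \exp\!\left[-c\left(\frac{r}{t^{1/d_w}}\right)^{d_w/(d_w-1)}\right],
\qquad d_s = \frac{2 d_f}{d_w},
```

which reduces to the ordinary Gaussian when d_w = 2 (Euclidean diffusion).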

196 citations


Journal ArticleDOI
TL;DR: In this paper, a fully automatic method, which involves the maximum likelihood method and may involve stepwise knot deletion and either the AIC or Bayesian information criterion (BIC), is used to determine the estimate.
Abstract: Logspline density estimation is developed for data that may be right censored, left censored, or interval censored. A fully automatic method, which involves the maximum likelihood method and may involve stepwise knot deletion and either the Akaike information criterion (AIC) or Bayesian information criterion (BIC), is used to determine the estimate. In solving the maximum likelihood equations, the Newton–Raphson method is augmented by occasional searches in the direction of steepest ascent. Also, a user interface based on S is described for obtaining estimates of the density function, distribution function, and quantile function and for generating a random sample from the fitted distribution.

168 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that the most appropriate form for urban population density models is the inverse power function, contrary to conventional practice, which is largely based upon the negative exponential.
Abstract: In this paper, we argue that the most appropriate form for urban population density models is the inverse power function, contrary to conventional practice, which is largely based upon the negative exponential. We first show that the inverse power function has several theoretical properties which have hitherto gone unremarked in the literature. Our main argument, however, is based on the notion that a density function should describe the extent to which the space available for urban development is filled. To this end, we introduce ideas from urban allometry and fractal geometry to demonstrate that the inverse power model is the only function which embodies the fractal property of self-similarity which we consider to be a basic characteristic of urban form and density. In short, we show that the distance parameter α of the inverse power model is a measure of the extent to which space is filled, and that its value is determined by the basic relation D + α = 2, where D is the fractal dimension of the city in question.
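
A one-line check of the quoted relation, under the stated inverse-power model ρ(r) = K r^{-α} for density at distance r from the center (valid for α < 2): the population inside radius R scales as

```latex
N(R) \;=\; \int_0^R K r^{-\alpha}\, 2\pi r \, dr
      \;=\; \frac{2\pi K}{2-\alpha}\, R^{\,2-\alpha}
      \;\propto\; R^{D}
\quad\Longrightarrow\quad D + \alpha = 2 .
```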

160 citations


Journal ArticleDOI
TL;DR: Simulators of construction operations often must approximate the underlying distribution of a random process using a standard statistical distribution (e.g., lognormal, normal, or beta), as discussed by the authors.
Abstract: Simulators of construction operations often must approximate the underlying distribution of a random process using a standard statistical distribution, e.g., lognormal, normal, and beta. In many of …

151 citations


Journal ArticleDOI
TL;DR: In this paper, a model for the probability distribution of the rainflow stress range is presented, based on a mixed-distribution Weibull model whose parameters can be evaluated from only two spectral properties, namely the irregularity factor I and a bandwidth parameter β0.75.

151 citations


Book ChapterDOI
TL;DR: A new method for the detection and measurement of a periodic signal in a data set when the authors have no prior knowledge of the existence of such a signal or of its characteristics is presented, applicable to data consisting of the locations or times of individual events.
Abstract: We present a new method for the detection and measurement of a periodic signal in a data set when we have no prior knowledge of the existence of such a signal or of its characteristics. It is applicable to data consisting of the locations or times of individual events. To address the detection problem, we use Bayes’ theorem to compare a constant rate model for the signal to models with periodic structure. The periodic models describe the signal plus background rate as a stepwise distribution in m bins per period, for various values of m. The Bayesian posterior probability for a periodic model contains a term which quantifies Ockham’s razor, penalizing successively more complicated periodic models for their greater complexity even though they are assigned equal prior probabilities. The calculation thus balances model simplicity with goodness-of-fit, allowing us to determine both whether there is evidence for a periodic signal, and the optimum number of bins for describing the structure in the data. Unlike the results of traditional “frequentist” calculations, the outcome of the Bayesian calculation does not depend on the number of periods examined, but only on the range examined. Once a signal is detected, we again use Bayes’ theorem to estimate the frequency of the signal. The probability density for the frequency is inversely proportional to the multiplicity of the binned events and is thus maximized for the frequency leading to the binned event distribution with minimum combinatorial entropy. The method is capable of handling gaps in the data due to intermittent observing or dead time.
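
A toy version of the frequency-scoring step can be sketched as follows (Python; the binning, the multiplicity W = N!/(n_1!…n_m!), and the score −log W follow the abstract, while the event simulation, the 1.7 Hz signal, and all names are our illustrative assumptions):

```python
import numpy as np
from scipy.special import gammaln

def log_inv_multiplicity(times, freq, m=8):
    """log(1/W) with W = N! / (n_1! ... n_m!), the multiplicity of the events
    folded at `freq` into m phase bins -- up to constants, the frequency score
    suggested by the abstract (larger = lower combinatorial entropy)."""
    phases = (times * freq) % 1.0
    counts = np.bincount((phases * m).astype(int), minlength=m)
    return gammaln(counts + 1.0).sum() - gammaln(len(times) + 1.0)

# Toy usage: event times thinned by a weak periodic rate at f0 = 1.7.
rng = np.random.default_rng(1)
t = rng.uniform(0, 100, 4000)
t = t[rng.random(t.size) < 0.5 * (1 + 0.8 * np.sin(2 * np.pi * 1.7 * t))]
freqs = np.linspace(1.5, 1.9, 2001)
scores = [log_inv_multiplicity(t, f) for f in freqs]
print("score peaks near", freqs[int(np.argmax(scores))])
```

Maximizing this score is exactly minimizing the combinatorial entropy of the binned event distribution, as the abstract describes; the full method additionally marginalizes over m and phase, which is omitted here.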

Journal ArticleDOI
TL;DR: Stretched exponentials provide good working approximations to the tails of the PDF and theoretical forms based on multifractal notions of turbulence agree well with the measured PDFs.
Abstract: Measurements have been made of the probability density function (PDF) of velocity increments Δu(r) for a wide range of separation distances r. Stretched exponentials provide good working approximations to the tails of the PDF. The stretching exponent varies monotonically from 0.5 for r in the dissipation range to 2 for r in the integral scale range. Theoretical forms based on multifractal notions of turbulence agree well with the measured PDFs. When the largest scales in the velocity u are filtered out, the PDF of Δu(r) becomes symmetric and, for large r, close to exponential.
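
In symbols, the stretched-exponential tail behaviour described here is commonly written as (the constant c and scale σ_r are fitted per separation r; this notation is ours, not the paper's):

```latex
P(\Delta u) \;\sim\; \exp\!\left[-c\,\left|\frac{\Delta u}{\sigma_r}\right|^{\beta(r)}\right],
\qquad \beta \approx 0.5 \ \text{(dissipation range)}
\;\longrightarrow\; 2 \ \text{(integral scales, Gaussian)} .
```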

Journal ArticleDOI
TL;DR: In this article, the validity of posterior probability statements follows from probability calculus when the likelihood is the density of the observations, and a more intuitive definition of validity is introduced, based on coverage of posterior sets.
Abstract: The validity of posterior probability statements follows from probability calculus when the likelihood is the density of the observations. To investigate other cases, a second, more intuitive definition of validity is introduced, based on coverage of posterior sets. This notion of validity suggests that the likelihood must be the density of a statistic, not necessarily sufficient, for posterior probability statements to be valid. A convenient numerical method is proposed to invalidate the use of certain likelihoods for Bayesian analysis. Integrated, marginal, and conditional likelihoods, derived to avoid nuisance parameters, are also discussed.

Book ChapterDOI
01 Jan 1992
TL;DR: An improved version of a self-organizing network model, first proposed at ICANN-91 and since applied to various problems, is described; the improvements are the generalization of the model to arbitrary dimension and the introduction of a local estimate of the probability density.
Abstract: In this paper an improved version of a self-organizing network model is described, which was proposed at ICANN-91 [3] and has since been applied to various problems [1,2,5]. The improvements presented here are the generalization of the model to arbitrary dimension and the introduction of a local estimate of the probability density. The latter leads to a very clear distinction between necessary and superfluous neurons with respect to modeling a given probability distribution. This makes it possible to automatically generate network structures that are nearly optimally suited for the distribution at hand.

Journal ArticleDOI
TL;DR: An asymptotic formula is given for the expected number of modes of a kernel density estimator, and this establishes the rate of convergence of the critically smoothed bandwidth.
Abstract: A test due to B.W. Silverman for modality of a probability density is based on counting modes of a kernel density estimator, and the idea of critical smoothing. An asymptotic formula is given for the expected number of modes. This, together with other methods, establishes the rate of convergence of the critically smoothed bandwidth. These ideas are extended to provide insight concerning the behaviour of the test based on bootstrap critical values.
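
The critical-smoothing idea is easy to sketch (Python; Gaussian kernel, for which the number of modes is monotone in the bandwidth, which is what makes the bisection valid; normalization constants are dropped since they do not affect mode counts):

```python
import numpy as np

def n_modes(data, h, grid=None):
    """Count local maxima of a Gaussian kernel density estimate
    (unnormalized; constants do not change the mode count)."""
    if grid is None:
        grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, 2001)
    kde = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2).sum(axis=1)
    return int(((kde[1:-1] > kde[:-2]) & (kde[1:-1] > kde[2:])).sum())

def critical_bandwidth(data, k=1, lo=1e-3, hi=10.0, iters=40):
    """Smallest bandwidth giving at most k modes, found by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if n_modes(data, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
print(critical_bandwidth(x, k=1))  # large: forcing unimodality needs heavy smoothing
```

Silverman's test then compares this critical bandwidth with its distribution under bootstrap resampling, which is the part the paper analyzes.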

Journal ArticleDOI
TL;DR: In this article, the statistical properties of the impulse response function of double-carrier multiplication avalanche photodiodes (APDs) are determined, including the effect of dead space, i.e., the minimum distance that a newly generated carrier must travel in order to acquire sufficient energy to become capable of causing an impact ionization.
Abstract: The statistical properties of the impulse response function of double-carrier multiplication avalanche photodiodes (APDs) are determined, including the effect of dead space, i.e., the minimum distance that a newly generated carrier must travel in order to acquire sufficient energy to become capable of causing an impact ionization. Recurrence equations are derived for the first and second moments and the probability distribution function of a set of random variables that are related, in a deterministic way, to the random impulse response function of the APD. The equations are solved numerically to produce the mean impulse response, the standard deviation, and the signal-to-noise ratio (SNR), all as functions of time.

Journal ArticleDOI
TL;DR: In this article, the authors examined the impact of data, namely measured transmissivities, upon the cumulative probability density function G(τ) and reduction of uncertainty for two-dimensional steady flow of average uniform head gradient.
Abstract: Solute transport through heterogeneous formations is modeled by the travel time approach, i.e., the time τ it takes a solute particle to travel from the source to a control plane. Due to uncertainty, τ is a random variable characterized by its cumulative probability density function G(τ). This function is used by regulatory agencies in pollution site assessment. The main aim of the present study is to examine the impact of data, namely measured transmissivities, upon G(τ) and reduction of uncertainty. This is achieved for two-dimensional steady flow of average uniform head gradient by using a Lagrangian approach developed in the past (Dagan, 1982, 1984, 1989; Rubin, 1990, 1991). The impact of data is seen in conditioning on measured values and in estimation of parameters characterizing flow and formation properties. The approach is illustrated by a simulation based on synthetic data. The results show how uncertainty is reduced by increasing the density of measurements for a control plane sufficiently far from the source.

Journal ArticleDOI
TL;DR: Locally optimum (LO) distributed detection is considered for observations that are dependent from sensor to sensor and solutions indicate that the LO sensor detector nonlinearities, in general, contain a term proportional to f'/f, where f is the noise probability density function.
Abstract: Locally optimum (LO) distributed detection is considered for observations that are dependent from sensor to sensor. The necessary conditions are presented for LO distributed sensor detector designs, and a locally optimum fusion rule for an N-sensor parallel distributed detection system with dependent sensor observations is given. Specific solutions are obtained for a random signal additive noise detection problem with two sensors. These solutions indicate that the LO sensor detector nonlinearities, in general, contain a term proportional to f'/f, where f is the noise probability density function (pdf). For some non-Gaussian pdf's, the new term is significant and causes the LO sensor detector nonlinearities to be nonsymmetric even for symmetric pdf's. LO solutions are presented for finite sample sizes, and the solutions for the asymptotic case are discussed. These results are extended to yield the form of the solutions for the N-sensor LO random signal distributed detection problem that generalize the two-sensor results.
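
For context, the f'/f term mentioned here is the classical locally optimum nonlinearity for a weak known signal in additive i.i.d. noise with density f (a standard single-sensor result, quoted as background rather than this paper's distributed solution):

```latex
g_{\mathrm{LO}}(x) \;=\; -\frac{f'(x)}{f(x)} ,
\qquad
f \ \text{Gaussian} \Rightarrow g_{\mathrm{LO}}(x) = x/\sigma^{2} ,
\qquad
f \ \text{Laplacian} \Rightarrow g_{\mathrm{LO}}(x) = \operatorname{sgn}(x)/b .
```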

Journal ArticleDOI
TL;DR: In this article, the probability density of trajectory separation satisfies a scaling law whose exponent is determined by the spectrum of local Lyapunov exponents, and it is shown that trajectory separation in a system of identical noisy mappings satisfies this scaling law.

Journal ArticleDOI
01 Jun 1992
TL;DR: In this paper, a model for coherent K-distributed clutter is presented, obtained by considering a coherent multidimensional Gaussian probability density function whose standard deviation is not known a priori but is itself a random variable with a Gamma distribution.
Abstract: The paper describes a model for coherent K-distributed clutter, obtained by considering a coherent multidimensional Gaussian probability density function whose standard deviation is not known a priori but is itself a random variable with a Gamma distribution. Subsequently, the problems of detecting an a priori known target and a Swerling zero target model embedded in coherent K-distributed clutter are considered. The corresponding detectors based on the likelihood ratio test are evaluated. A mixed numerical and simulation procedure is set up to evaluate the receiver operating characteristics of the processor. The detection probability versus the signal-to-clutter power ratio, having as parameters the probability of false alarm and the clutter skewness, is evaluated for a number of processed pulses.
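
A minimal sketch of the compound representation described in the abstract (Gaussian "speckle" with Gamma-distributed power, giving K-distributed amplitudes; pulse-to-pulse correlation, which a coherent model would add via a covariance/Cholesky factor, is omitted here):

```python
import numpy as np

def k_clutter(n_pulses, n_cells, nu, rng=np.random.default_rng(3)):
    """Compound-Gaussian sketch of K-distributed clutter: complex Gaussian
    speckle whose power ('texture') is Gamma-distributed per range cell.
    Smaller nu gives spikier (more skewed) clutter; nu -> inf is Gaussian."""
    texture = rng.gamma(shape=nu, scale=1.0 / nu, size=n_cells)   # mean 1
    speckle = (rng.standard_normal((n_cells, n_pulses))
               + 1j * rng.standard_normal((n_cells, n_pulses))) / np.sqrt(2)
    return np.sqrt(texture)[:, None] * speckle

c = k_clutter(n_pulses=16, n_cells=10000, nu=0.5)
print(np.mean(np.abs(c) ** 2))  # ~1: unit average clutter power
```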

Journal ArticleDOI
TL;DR: The results of this study indicate that the statistics of the 2D-DCT coefficients of motion-compensated blocks, including the normalized coefficients, are best approximated by a Laplacian pdf (probability density function).
Abstract: Block-matching motion compensation techniques are widely used in image coding algorithms. A differential signal with different characteristics from the original signal is then generated. It is important to know the statistical properties of the signal source in order to correctly characterize some parameters of the digital image coding scheme. In this paper a statistical study of the 2D-DCT coefficients of the motion-compensated blocks is performed. The results of this study indicate that the statistics of the coefficients are best approximated by a Laplacian pdf (probability density function). The influence of some types of normalization is investigated and the corresponding pdf's are estimated. An analysis indicates that the Laplacian pdf may be used as a good approximation for the statistical properties of the normalized coefficients.
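
A sketch of the kind of fit involved (Python; the residual data here is a synthetic stand-in rather than real motion-compensated blocks, so this only illustrates the maximum-likelihood Laplacian fit, not the paper's measurements):

```python
import numpy as np
from scipy.fft import dctn

# Hypothetical stand-in for motion-compensated residual blocks (real residuals
# would come from a block-matching coder; here we only exercise the fit).
rng = np.random.default_rng(4)
blocks = rng.laplace(0.0, 3.0, size=(1000, 8, 8))
coeffs = dctn(blocks, axes=(1, 2), norm='ortho')   # 2D-DCT of each 8x8 block

# Maximum-likelihood fit of a Laplacian f(x) = exp(-|x|/b) / (2b) to one AC
# coefficient position: location is the median, scale the mean |deviation|.
ac = coeffs[:, 0, 1]
b_hat = np.abs(ac - np.median(ac)).mean()
print("fitted Laplacian scale:", b_hat)
```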

Journal ArticleDOI
TL;DR: In this article, the method of weighted residuals is applied to the reduced Fokker-Planck equation associated with a non-linear oscillator, which is subjected to both additive and multiplicative Gaussian white noise excitations.
Abstract: The method of weighted residuals is applied to the reduced Fokker-Planck equation associated with a non-linear oscillator, which is subjected to both additive and multiplicative Gaussian white noise excitations. A set of constraints is deduced for obtaining an approximate stationary probability density for the system response. One of the constraints coincides with the previously proposed criterion of dissipation energy balancing, and the others are useful for calculating the equivalent conservative force. It is shown that these constraints imply certain relationships among statistical moments; their imposition guarantees that such moments computed from the approximate probability density satisfy the corresponding exact equations derived from the original equation of motion. Moreover, the well-known procedure of stochastic linearization and its improved version of partial linearization are shown to be special cases of this scheme, and they are less accurate since the approximations are not chosen from the entire set of the solution pool of generalized stationary potential. Applications of the scheme are illustrated by examples, and its accuracy is substantiated by Monte Carlo simulation results.

Journal ArticleDOI
TL;DR: A number of techniques have been developed to compute the velocity distribution and, hence, the momentum and energy coefficients along a nonuniform open-channel flow, as mentioned in this paper; analysis in the probability domain makes it possible to determine the cross-sectional mean velocity and these coefficients without having to deal with the geometrical shape of cross sections.
Abstract: A number of techniques have been developed to compute the velocity distribution and, hence, the momentum and energy coefficients along a nonuniform open-channel flow. Analysis of velocity distribution in the probability domain has made it possible to determine the cross-sectional mean velocity and the momentum and energy coefficients without having to deal with the geometrical shape of cross sections, which tend to be extremely complex in natural channels. In the probability space, the area mean values can be obtained as the mathematical expectations that can be determined from the probability density function of velocity derived by entropy maximization. The capability of a velocity distribution, derived from the probability and entropy concepts, to describe various possible patterns of velocity distribution along a nonuniform flow has been demonstrated. Simple graphical methods to estimate a model parameter have also been developed for practical applications in the study of transport processes in open channels, natural or man-made, that are related to the velocity distribution.
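
The entropy-maximizing velocity distribution alluded to is usually quoted in Chiu's form, with M the entropy parameter and ξ ∈ [0, 1] the cumulative probability (a standard statement of the result, reproduced here as an assumption about the paper's notation):

```latex
\frac{u}{u_{\max}} \;=\; \frac{1}{M}\,\ln\!\bigl[\,1 + \left(e^{M}-1\right)\xi\,\bigr] .
```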

Journal ArticleDOI
TL;DR: In this article, a method of generating an interpolating stochastic process for simulation of conditional random fields involving deterministic time functions is presented, which is to be used as a major tool for stochastic interpolation of earthquake ground motions.

Journal ArticleDOI
TL;DR: Approximate entropy is seen as the information-theoretic rate of entropy for approximating Markov chains and is suggested as a parameter for turbulence; a discontinuity in the Kolmogorov-Sinai entropy implies that in the physical world, some measure of coarse graining in a mixing parameter is required.
Abstract: A common framework of finite state approximating Markov chains is developed for discrete time deterministic and stochastic processes. Two types of approximating chains are introduced: (i) those based on stationary conditional probabilities (time averaging) and (ii) transient, based on the percentage of the Lebesgue measure of the image of cells intersecting any given cell. For general dynamical systems, stationary measures for both approximating chains converge weakly to stationary measures for the true process as partition width converges to 0. From governing equations, transient chains and resultant approximations of all n-time unit probabilities can be computed analytically, despite typically singular true-process stationary measures (no density function). Transition probabilities between cells account explicitly for correlation between successive time increments. For dynamical systems defined by uniformly convergent maps on a compact set (e.g., logistic, Hénon maps), there also is weak continuity with a control parameter. Thus all moments are continuous with parameter change, across bifurcations and chaotic regimes. Approximate entropy is seen as the information-theoretic rate of entropy for approximating Markov chains and is suggested as a parameter for turbulence; a discontinuity in the Kolmogorov-Sinai entropy implies that in the physical world, some measure of coarse graining in a mixing parameter is required.
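
The transient chain of type (ii) can be approximated numerically in the Ulam style (our sketch: the Lebesgue measure of image-cell intersections is estimated by uniform sampling within each cell, with the logistic map as the example system):

```python
import numpy as np

def ulam_chain(f, n_cells=100, samples_per_cell=1000, rng=np.random.default_rng(5)):
    """Transient approximating chain in the Ulam spirit: P[i, j] estimates the
    fraction of the Lebesgue measure of cell i whose image under f lies in
    cell j, here by uniform sampling within each cell of [0, 1]."""
    P = np.zeros((n_cells, n_cells))
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    for i in range(n_cells):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
        j = np.clip((f(x) * n_cells).astype(int), 0, n_cells - 1)
        P[i] = np.bincount(j, minlength=n_cells) / samples_per_cell
    return P

logistic = lambda x: 4.0 * x * (1.0 - x)
P = ulam_chain(logistic)
pi = np.ones(P.shape[0]) / P.shape[0]
for _ in range(2000):          # power iteration for the stationary row vector
    pi = pi @ P
# Coarse-grained version of the known invariant density 1 / (pi * sqrt(x(1-x))):
print(pi[:5] * P.shape[0])
```

The stationary vector of the chain is the coarse-grained stationary measure whose weak convergence (as the partition is refined) the abstract asserts.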

Book ChapterDOI
01 Jan 1992
TL;DR: The paper presents a method for calculating the response statistics of nonlinear dynamic systems excited by a white noise or filtered white noise process based on the path integral solution technique, which is a viable alternative to the direct numerical solution of the FPK equation.
Abstract: The paper presents a method for calculating the response statistics of nonlinear dynamic systems excited by a white noise or filtered white noise process. The method, which is based on the path integral solution technique, is still under development, but experience so far indicates that it is singularly well suited for numerical calculation of the response statistics of nonlinear systems to which can be associated a Markov vector process whose probability density satisfies a Fokker-Planck-Kolmogorov (FPK) equation. The method is a viable alternative to the direct numerical solution of the FPK equation. A key feature of the method is the possibility of obtaining highly accurate solutions at very low probability levels. Also, there seems to be almost no limitation on the type of nonlinearity that can be accommodated. On the negative side there are clear limitations of the method concerning required computer resources.
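
A bare-bones version of the path integral iteration (our sketch for a scalar Itô SDE with a Gaussian short-time propagator; the double-well drift, step size, and grid are illustrative choices, not the paper's):

```python
import numpy as np

# Scalar Ito SDE  dX = a(X) dt + sigma dW  with an illustrative double-well
# drift; the stationary FPK solution is p(x) ∝ exp(2/sigma^2 * (x^2/2 - x^4/4)).
a = lambda x: x - x ** 3
sigma, dt = 0.5, 0.01
x = np.linspace(-3.0, 3.0, 601)
dx = x[1] - x[0]

# Gaussian short-time propagator k(x | x') = N(x; x' + a(x') dt, sigma^2 dt):
K = np.exp(-(x[:, None] - x[None, :] - a(x)[None, :] * dt) ** 2
           / (2 * sigma ** 2 * dt)) / np.sqrt(2 * np.pi * sigma ** 2 * dt)

p = np.exp(-x ** 2)            # any broad starting density
p /= p.sum() * dx
for _ in range(20000):         # path integral iteration: p <- K p
    p = K @ p * dx
    p /= p.sum() * dx          # guard against truncation at the grid edges

exact = np.exp((x ** 2 / 2 - x ** 4 / 4) * 2 / sigma ** 2)
exact /= exact.sum() * dx
print(np.max(np.abs(p - exact)))  # small, including far into the tails
```

Because each step multiplies densities rather than differentiating them, the iteration stays accurate at very low probability levels, which is the key feature the abstract highlights.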

Book
03 Jan 1992
TL;DR: The authors develop the mathematical tools for the computation of the average error due to quantization and derive the analytic expression for the probability density of error distribution of a function of an arbitrarily large number of independently quantized variables.
Abstract: The authors develop the mathematical tools for the computation of the average error due to quantization. They can be used in estimating the actual error occurring in the implementation of a method. Also derived is the analytic expression for the probability density of the error distribution of a function of an arbitrarily large number of independently quantized variables. The probability of the error of the function being within a given range can thus be obtained accurately. In analyzing the applicability of an approach, it is necessary to determine whether the approach is capable of withstanding the quantization error. If it is not, then regardless of the accuracy with which the experiments are carried out, the approach will yield unacceptable results. The tools developed can be used in the analysis of the applicability of a given algorithm, hence revealing the intrinsic limitations of the approach.
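
The error density of a function of independently quantized variables is easiest to see for a plain sum (our sketch: each roundoff error is uniform on (−D/2, D/2), so the total error density is an n-fold convolution of uniforms, an Irwin-Hall-type density):

```python
import numpy as np

rng = np.random.default_rng(6)
n, D = 4, 0.1                      # n quantized inputs, quantization step D
x = rng.uniform(-10, 10, (100000, n))
err = (np.round(x / D) * D - x).sum(axis=1)   # actual quantization error of a sum

# Analytic density: the n-fold convolution of Uniform(-D/2, D/2) densities.
grid = np.linspace(-n * D / 2, n * D / 2, 801)
dg = grid[1] - grid[0]
u = (np.abs(grid) <= D / 2).astype(float) / D
pdf = u.copy()
for _ in range(n - 1):
    pdf = np.convolve(pdf, u, mode='same') * dg

print(err.std(),
      np.sqrt(np.trapz(grid ** 2 * pdf, grid)),   # std from the analytic pdf
      np.sqrt(n * D ** 2 / 12))                   # textbook value D*sqrt(n/12)
```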

Journal ArticleDOI
Lanh Tat Tran
TL;DR: In this paper, uniform strong consistency of kernel density estimators of f was established and their rates of convergence were obtained; the estimators can achieve the rate of convergence (n^{-1} log n)^{1/3} in the L∞ norm restricted to compacts under weak conditions.

Journal ArticleDOI
TL;DR: Three-dimensional intensity information can be used to generate a set of projections from which it is possible to reconstruct the second-order statistics of the partially coherent wave field.
Abstract: The Wigner distribution of the electric fields in a quasimonochromatic light wave is equivalent, in the paraxial approximation, to the cross-spectral density function of the wave. The intensity distribution in a plane may be described as a projection across this Wigner distribution. Three-dimensional intensity information can be used to generate a set of projections from which it is possible to reconstruct the second-order statistics of the partially coherent wave field.
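
One common way to state the projection relation compactly (our paraphrase, with λ the wavelength, u spatial frequency, and z the propagation distance): paraxial free-space propagation shears the Wigner function W(x, u) in phase space, so each measured intensity profile is a different projection,

```latex
I_z(x) \;=\; \int W\!\left(x - \lambda z\, u,\; u\right) du ,
```

and measuring I_z over a range of z supplies the set of projections from which W, and hence the second-order statistics of the field, can be reconstructed tomographically.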

Patent
31 Dec 1992
TL;DR: In this article, a speech recognition memory compression method and apparatus subpartitions probability density function (pdf) space along the hidden Markov model (HMM) index into packets of typically 4 to 8 log-pdf values.
Abstract: A speech recognition memory compression method and apparatus subpartitions probability density function (pdf) space along the hidden Markov model (HMM) index into packets of typically 4 to 8 log-pdf values. Vector quantization techniques are applied using a logarithmic distance metric and a probability weighted logarithmic probability space for the splitting of clusters. Experimental results indicate a significant reduction in memory can be obtained with little increase in overall speech recognition error.