
Showing papers on "Probability density function published in 1988"


Journal ArticleDOI
TL;DR: In this paper, an equation of state for material in stellar envelopes, subject to the limits of temperature less than about 10 to the 7th K and density less than about 0.01 g/cu cm, is presented.
Abstract: An equation of state for material in stellar envelopes, subject to the limits of temperature less than about 10 to the 7th K and density less than about 0.01 g/cu cm, is presented. The equation makes it possible to express the free energy as the sum of several terms representing effects such as partial degeneracy of the electrons, Coulomb interactions among charged particles, finite-volume (hard-sphere) repulsion, and van der Waals attraction. An occupation probability formalism is used to represent the effects of the plasma in establishing a finite partition function. It is shown that the use of the static screened Coulomb potential to calculate level shifts and to estimate the cutoff of the internal partition function is invalid. For most of the parameter space relevant to stellar envelopes, perturbations arising from the plasma ions are shown to be dominant in establishing the internal partition function.

614 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that if Z is normally distributed and f has k bounded derivatives, then the fastest attainable convergence rate of any nonparametric estimator of f is only (log n) −k/2.
Abstract: Suppose that the sum of two independent random variables X and Z is observed, where Z denotes measurement error and has a known distribution, and where the unknown density f of X is to be estimated. One application is the estimation of a prior density for a sequence of location parameters. A second application arises in the errors-in-variables problem for nonlinear and generalized linear models, when one attempts to model the distribution of the true but unobservable covariates. This article shows that if Z is normally distributed and f has k bounded derivatives, then the fastest attainable convergence rate of any nonparametric estimator of f is only (log n)^(-k/2). Therefore, deconvolution with normal errors may not be a practical proposition. Other error distributions are also treated. The estimators of Stefanski and Carroll (1987a) achieve the optimal rates. The results given have versions for multiplicative errors, where they imply that even optimal rates are exceptionally slow.
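
As a concrete illustration of the deconvolution setting (not the Stefanski and Carroll construction itself), the following Python sketch forms a deconvoluting kernel estimate by dividing the empirical characteristic function of the observed Y by the known normal error characteristic function and inverting over a truncated frequency band; the kernel (whose Fourier transform is an indicator), the bandwidth h, and the grids are illustrative assumptions.

```python
import numpy as np

def deconvolution_kde(y, sigma, x_grid, h):
    """Deconvoluting kernel density estimate of f_X from Y = X + Z, Z ~ N(0, sigma^2).

    Uses a Fourier-inversion form with a kernel whose characteristic function is
    the indicator of [-1, 1] (a sinc kernel), so the integration is truncated to
    |t| <= 1/h and the division by the normal characteristic function stays finite.
    """
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)              # frequency grid
    emp_cf = np.exp(1j * np.outer(t, y)).mean(axis=1)      # empirical char. fn of Y
    err_cf = np.exp(-0.5 * (sigma * t) ** 2)               # char. fn of N(0, sigma^2)
    integrand = emp_cf / err_cf                            # "deconvolved" char. fn of X
    dt = t[1] - t[0]
    # invert: f(x) = (1/2pi) * integral exp(-itx) * integrand dt
    f = np.real(np.exp(-1j * np.outer(x_grid, t)) @ integrand) * dt / (2 * np.pi)
    return np.clip(f, 0.0, None)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)          # unobserved X
y = x + rng.normal(0.0, 0.5, size=500)      # observed Y, known error sd 0.5
grid = np.linspace(-4, 4, 201)
fhat = deconvolution_kde(y, sigma=0.5, x_grid=grid, h=0.4)
```

The slow logarithmic rates quoted above show up in practice as a strong sensitivity of this estimate to the choice of h.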

585 citations


Journal ArticleDOI
TL;DR: In this paper, a method by which sample fields of a multidimensional non-Gaussian homogeneous stochastic field can be generated is developed, where the method first generates Gaussian sample fields and then maps them into non-Gaussian sample fields with the aid of an iterative procedure.
Abstract: A method by which sample fields of a multidimensional non‐Gaussian homogeneous stochastic field can be generated is developed. The method first generates Gaussian sample fields and then maps them into non‐Gaussian sample fields with the aid of an iterative procedure. Numerical examples indicate that the procedure is very efficient and generated sample fields satisfy the target spectral density and probability distribution function accurately. The proposed method has a wide range of applicability to engineering problems involving stochastic fields where the Gaussian assumption is not appropriate.
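
A minimal 1-D sketch of the translate-and-iterate idea described above, assuming a lognormal target marginal and a Gaussian-shaped target spectrum; the FFT-based generator, the ensemble size, and the multiplicative spectral update rule are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, dx = 1024, 1.0
freqs = np.fft.rfftfreq(n, d=dx)

S_target = np.exp(-(freqs / 0.05) ** 2)      # target one-sided spectral density (illustrative)
S_target[0] = 0.0
target_ppf = stats.lognorm(s=0.8).ppf        # target non-Gaussian marginal distribution

def sample_gaussian(S):
    """Draw a zero-mean Gaussian field whose periodogram follows the spectrum S."""
    amp = np.sqrt(np.maximum(S, 0.0) * n / (2.0 * dx))
    z = amp * (rng.normal(size=S.size) + 1j * rng.normal(size=S.size))
    z[0] = 0.0
    return np.fft.irfft(z, n=n)

S = S_target.copy()
for _ in range(10):                          # iterative correction of the underlying spectrum
    S_mapped = np.zeros_like(S)
    for _ in range(50):                      # ensemble-average the spectrum of the mapped field
        g = sample_gaussian(S)
        u = stats.norm.cdf((g - g.mean()) / g.std())
        y = target_ppf(u)                    # map to the target marginal (translation step)
        S_mapped += np.abs(np.fft.rfft(y - y.mean())) ** 2 * dx / n
    S_mapped /= 50.0
    S *= S_target / np.maximum(S_mapped, 1e-12)   # push the mapped spectrum toward the target

g = sample_gaussian(S)
sample_field = target_ppf(stats.norm.cdf((g - g.mean()) / g.std()))
```

The marginal mapping guarantees the target probability distribution exactly, while the iteration adjusts the underlying Gaussian spectrum so that the mapped field also matches the target spectral density.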

294 citations


Journal ArticleDOI
Craig D. Poole1
TL;DR: Polarization dispersion in single-mode fiber with random polarization mode coupling is given a statistical treatment based on the recently proposed principal-states model and the variance is shown to have a linear dependence on length while the probability density function for the delay time approaches a Gaussian shape.
Abstract: Polarization dispersion in single-mode fiber with random polarization mode coupling is given a statistical treatment based on the recently proposed principal-states model. An expression for the ensemble variance of the differential delay time between the principal states of polarization is derived by using coupled-mode theory under the assumption of weak coupling. For long fiber lengths, the variance is shown to have a linear dependence on length while the probability density function for the delay time approaches a Gaussian shape.

220 citations


Journal ArticleDOI
TL;DR: A major problem in using SVD (singular-value decomposition) as a tool in determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values.
Abstract: A major problem in using SVD (singular-value decomposition) as a tool in determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance tests. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
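
To make the rank-determination setting concrete, here is a hedged Python sketch that thresholds the singular values of a noisy matrix; the particular threshold, a multiple of noise_std * sqrt(max(m, n)), is an illustrative rule of thumb based on the expected size of noise singular values, not the confidence regions derived in the paper.

```python
import numpy as np

def effective_rank(A_noisy, noise_std, alpha=3.0):
    """Estimate the effective rank of a noisy matrix from its singular values.

    Singular values above tau = alpha * noise_std * sqrt(max(m, n)) are treated
    as "significantly large"; alpha plays the role of a significance level.
    """
    m, n = A_noisy.shape
    s = np.linalg.svd(A_noisy, compute_uv=False)
    tau = alpha * noise_std * np.sqrt(max(m, n))
    return int(np.sum(s > tau)), s, tau

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))     # true rank 3
r, s, tau = effective_rank(A + 0.1 * rng.normal(size=A.shape), noise_std=0.1)
```

The same thresholding idea applies to a sample autocorrelation matrix when estimating the order of an autoregressive system.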

206 citations


Journal ArticleDOI
TL;DR: In this article, a new statistical model is proposed for the geomagnetic secular variation over the past 5 Ma. Unlike previous models, it makes use of statistical characteristics of the present-day geomagnetic field, and it is consistent with a white source near the core-mantle boundary with a Gaussian distribution.
Abstract: A new statistical model is proposed for the geomagnetic secular variation over the past 5 Ma. Unlike previous models, the model makes use of statistical characteristics of the present-day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with a Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole, and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on the Earth's surface. Global paleomagnetic data spanning the past 5 Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
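
A minimal sketch of the kind of model described above, assuming independent zero-mean Gaussian Gauss coefficients whose variance follows a downward-continuation scaling from a white source near the core-mantle boundary; the amplitude alpha, truncation degree, and exact degree dependence are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

a, c = 6371.2, 3480.0            # Earth and core-mantle boundary radii (km)
alpha = 10.0                     # overall amplitude (nT); illustrative only
l_max = 12
rng = np.random.default_rng(0)

R_l = np.zeros(l_max + 1)        # Lowes-Mauersberger spatial power spectrum at the surface
for l in range(1, l_max + 1):
    # per-coefficient variance chosen so the expected spectrum decays like a white
    # source near radius c continued up to radius a (illustrative scaling)
    var = alpha**2 * (c / a) ** (2 * l + 4) / ((l + 1) * (2 * l + 1))
    g = rng.normal(0.0, np.sqrt(var), size=l + 1)   # g_l^0 ... g_l^l
    h = rng.normal(0.0, np.sqrt(var), size=l)       # h_l^1 ... h_l^l
    R_l[l] = (l + 1) * (np.sum(g**2) + np.sum(h**2))
```

Repeating such draws, adding a separately specified dipole, and evaluating the field at a site would yield the simulated declination and inclination distributions that the paper compares with paleomagnetic data.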

189 citations


Book
29 Sep 1988
TL;DR: In this paper, the authors consider two random variables X, Y with a known joint density function fx,y(.,.). Assume that in a particular experiment, the random variable Y can be measured and takes the value y. What can be said about the corresponding value of the unobservable variable X?
Abstract: Suppose we have two random variables X, Y with a known joint density function fx,y(.,.). Assume that in a particular experiment, the random variable Y can be measured and takes the value y. What can be said about the corresponding value, say x, of the unobservable variable X?
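
In density terms (a standard identity, not specific to this book), the answer is the conditional density of X given Y = y, $f_{X\mid Y}(x \mid y) = f_{X,Y}(x, y) / f_Y(y)$ with $f_Y(y) = \int f_{X,Y}(u, y)\,du$, from which point estimates such as the conditional mean $E[X \mid Y = y] = \int x\, f_{X\mid Y}(x \mid y)\,dx$ follow.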

179 citations


Journal ArticleDOI
TL;DR: In this paper, a review of Bayesian parameter estimation is given, where the approach of Tarantola and Valette can be derived within classical probability theory, and the Bayesian approach allows a full resolution and uncertainty analysis, which is discussed in Part II of the paper.
Abstract: This paper gives a review of Bayesian parameter estimation. The Bayesian approach is fundamental and applicable to all kinds of inverse problems. Its basic formulation is probabilistic. Information from data is combined with a priori information on model parameters. The result is called the a posteriori probability density function and it is the solution to the inverse problem. In practice an estimate of the parameters is obtained by taking its maximum. Well-known estimation procedures like least-squares inversion or l1-norm inversion result, depending on the type of noise and a priori information given. Due to the a priori information the maximum will be unique and the estimation procedures will be stable except (in theory) for the most pathological problems which are very unlikely to occur in practice. The approach of Tarantola and Valette can be derived within classical probability theory. The Bayesian approach allows a full resolution and uncertainty analysis which is discussed in Part II of the paper.
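
A minimal Python sketch of the workflow described above for the special case of a linear forward model with Gaussian noise and a Gaussian prior, where the maximum of the a posteriori density reduces to damped least squares; the matrices, prior, and noise levels are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))                 # forward operator d = G m + noise
m_true = rng.normal(size=5)
d = G @ m_true + 0.1 * rng.normal(size=20)

Cd_inv = np.eye(20) / 0.1**2                 # data precision (Gaussian noise)
Cm_inv = np.eye(5) / 1.0**2                  # prior precision (Gaussian prior)
m0 = np.zeros(5)                             # prior mean

# MAP estimate: maximize posterior, i.e. minimize
#   (d - G m)^T Cd^{-1} (d - G m) + (m - m0)^T Cm^{-1} (m - m0)
A = G.T @ Cd_inv @ G + Cm_inv
b = G.T @ Cd_inv @ d + Cm_inv @ m0
m_map = np.linalg.solve(A, b)                # damped least-squares solution
```

With Laplacian (double-exponential) noise the same maximization would instead lead to an l1-norm criterion, which is the connection to the estimation procedures mentioned in the abstract.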

178 citations



Book ChapterDOI
01 Jan 1988
TL;DR: Allais' general theory of random choices is essentially based on the consideration of the invariant cardinal utility function and the whole distribution of cardinal utilities, and on the general preference for security in the neighbourhood of certainty when large sums are at stake as mentioned in this paper.
Abstract: Allais’ general theory of random choices is essentially based on the consideration of the invariant cardinal utility function and the whole distribution of cardinal utilities, and on the general preference for security in the neighbourhood of certainty when large sums are at stake.

152 citations


Journal ArticleDOI
TL;DR: The numerical solution proposed here is obtained by modifying the recursion and using a simple piece-wise constant approximation to the density functions, yielding a bound on the maximum error growth, and a characterization of the situations with potential for large errors.

Journal ArticleDOI
TL;DR: In this article, a general mathematical procedure for generating synthetic daily solar irradiation values is described, which is useful for simulating solar energy systems, requiring only the 12 monthly means (K_t) as input.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of estimating the location of the mode via kernel density estimates, deriving asymptotic minimax risk lower bounds for estimators of the mode and showing that kernel density estimates of the mode possess a certain optimal local asymptotic minimax risk property.
Abstract: A mode of a probability density $f(t)$ is a value $\theta$ that maximizes $f$. The problem of estimating the location of the mode is considered here. Estimates of the mode are considered via kernel density estimates. Previous results on this problem have several serious drawbacks. Conditions on the underlying density $f$ are imposed globally (rather than locally in a neighborhood of $\theta$). Moreover, fixed bandwidth sequences are considered, resulting in an estimate of the location of the mode that is not scale-equivariant. In addition, an optimal choice of bandwidth depends on the underlying density, and so cannot be realized by a fixed bandwidth sequence. Here, fixed and random bandwidths are considered, while relatively weak assumptions are imposed on the underlying density. Asymptotic minimax risk lower bounds are obtained for estimators of the mode and kernel density estimates of the mode are shown to possess a certain optimal local asymptotic minimax risk property. Bootstrapping the sampling distribution of the estimates is also discussed.
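
A minimal sketch of a kernel-based mode estimator in this spirit: form a kernel density estimate and take its maximizer over a grid. The data-driven bandwidth supplied by scipy's gaussian_kde (Scott's rule) is an illustrative, scale-equivariant choice, not the bandwidth sequences analyzed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=400)          # sample whose mode theta = 2 is to be estimated

kde = stats.gaussian_kde(x)                 # kernel density estimate with data-driven bandwidth
grid = np.linspace(x.min(), x.max(), 2001)
mode_hat = grid[np.argmax(kde(grid))]       # estimate of the location of the mode
```

Bootstrapping this procedure (resampling x and recomputing mode_hat) gives a simple approximation to the sampling distribution of the estimate, as discussed at the end of the abstract.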

Journal ArticleDOI
Peter Lenk1
TL;DR: In this paper, a generalization of the process derived from a logistic transform of a Gaussian process is proposed to model the common density of an exchangeable sequence of observations.
Abstract: This article models the common density of an exchangeable sequence of observations by a generalization of the process derived from a logistic transform of a Gaussian process. The support of the logistic normal includes all distributions that are absolutely continuous with respect to the dominating measure of the observations. The logistic-normal family is closed in the prior to posterior Bayes analysis, with the observations entering the posterior distribution through the covariance function of the Gaussian process. The covariance of the Gaussian process plays the role of a smoothing kernel. Three features of the model provide a flexible structure for computing the predictive density: (a) The mean of the Gaussian process corresponds to the prior mean of the random density; (b) The prior variance of the Gaussian process controls the influence of the data in the posterior process. As the variance increases, the predictive density has greater fidelity to the data; (c) The prior covariance of the Gau...
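
A minimal sketch of drawing one random density from a logistic (exponentiate-and-normalize) transform of a Gaussian process on a grid; the squared-exponential covariance, which plays the role of the smoothing kernel, and its parameters are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
mean = np.zeros_like(t)                                         # prior mean of the Gaussian process Z
cov = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2)     # covariance acting as smoothing kernel
z = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(t.size))  # one draw of Z on the grid

dens = np.exp(z)                                                # logistic transform
dens /= dens.sum() * (t[1] - t[0])                              # normalize to integrate to one
```

Larger prior variance in cov produces rougher, more data-adaptive densities once the process is updated to its posterior, which is the behaviour described in point (b) above.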

Book
01 Jan 1988
TL;DR: This book discusses Simulation Modeling, the Modeling Aspect of Simulation, and the Properties of Pseudo-Random Variables.
Abstract: MODELING AND CRUDE SIMULATION Definition of Simulation Golden Rules and Principles of Simulation Modeling: Illustrative Examples and Problems The Modeling Aspect of Simulation Single-Server, Single-Input, First-In/First-Out (FIFO) Queue Multiple-Server, Single-Input Queue An Example from Statistics: The Trimmed t Statistic An Example from Engineering: Reliability of Series Systems A Military Problem: Proportional Navigation Comments on the Examples Crude (or Straightforward) Simulation and Monte Carlo Introduction: Pseudo-Random Numbers Crude Simulation Details of Crude Simulation A Worked Example: Passage of Ships Through a Mined Channel Generation of Random Permutations Uniform Pseudo-Random Variable Generation Introduction: Properties of Pseudo-Random Variables Historical Perspectives Current Algorithms Recommendations for Generators Computational Considerations The Testing of Pseudo-Random Number Generators Conclusions on Generating and Testing Pseudo-Random Number Generators SOPHISTICATED SIMULATION Descriptions and Quantifications of Univariate Samples: Numerical Summaries Introduction Sample Moments Percentiles, the Empirical Cumulative Distribution Function, and Goodness-of-Fit Tests Quantiles Descriptions and Quantifications of Univariate Samples: Graphical Summaries Introduction Numerical and Graphical Representations of the Probability Density Function Alternative Graphical Methods for Exploring Distributions Comparisons in Multifactor Simulations: Graphical and Formal Methods Introduction Graphical and Numerical Representation of Multifactor Simulation Experiments Specific Considerations for Statistical Simulation Summary and Computing Resources Assessing Variability in Univariate Samples: Sectioning, Jackknifing, and Bootstrapping Introduction Preliminaries Assessing Variability of Sample Means and Percentiles Sectioning to Assess Variability: Arbitrary Estimates from Non-Normal Samples Bias Elimination Variance Assessment with the Complete Jackknife Variance Assessment with the Bootstrap Simulation Studies of Confidence Interval Estimation Schemes Bivariate Random Variables: Definitions, Generation, and Graphical Analysis Introduction Specification and Properties of Bivariate Random Variables Numerical and Graphical Analyses for Bivariate Data The Bivariate Inverse Probability Integral Transform Ad Hoc and Model-Based Methods for Bivariate Random Variable Generation Variance Reduction Introduction Antithetic Variates: Induced Negative Correlation Control Variables Conditional Sampling Importance Sampling Stratified Sampling

Journal ArticleDOI
TL;DR: In this paper, strongly consistent estimates are obtained for the 2t-variate joint and transition probability density functions of the process, as well as for its initial and one-step transition distribution functions.

Journal ArticleDOI
TL;DR: In this paper, a unified approach to the construction of confidence bands in nonparametric density estimation and regression is described, based on interpolation formulae in numerical differentiation, and their arguments generate a variety of bands depending on the assumptions one is prepared to make about derivatives of the unknown function.

Journal ArticleDOI
TL;DR: In this paper, the authors show that kernel smoothing is an attractive method for the nonparametric estimation of either a probability density function or the intensity function of a nonstationary Poisson process.
Abstract: Kernel smoothing is an attractive method for the nonparametric estimation of either a probability density function or the intensity function of a nonstationary Poisson process. In each case the amount of smoothing, controlled by the bandwidth, that is, smoothing parameter, is crucial to the performance of the estimator. Bandwidth selection by cross-validation has been widely studied in the context of density estimation. A bandwidth selector in the intensity estimation case has been proposed that minimizes an estimate of the mean squared error under the assumption that the data are generated by a stationary Cox process. This article shows that these two methods each select the same bandwidth, even though they are motivated in much different ways. In addition to providing further justification of each method, this equivalence of smoothing parameter selectors yields new insights for both density and intensity estimation. A benefit for intensity estimation is that this equivalence makes it clear how ...
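
A hedged Python sketch of the density-side selector discussed above, least-squares cross-validation for a Gaussian-kernel density estimate (the intensity-side construction is not reproduced here); the grid of candidate bandwidths is an illustrative choice.

```python
import numpy as np

def lscv_score(x, h):
    """Least-squares cross-validation criterion for a Gaussian-kernel KDE.

    Closed form for Gaussian kernels:
      LSCV(h) = (1/(n^2 h)) * sum_{i,j} phi_{sqrt(2)}((xi - xj)/h)
                - (2/(n (n-1) h)) * sum_{i != j} phi((xi - xj)/h),
    where phi_s is the N(0, s^2) density.
    """
    n = x.size
    d = (x[:, None] - x[None, :]) / h
    phi = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))
    term1 = phi(d, np.sqrt(2.0)).sum() / (n**2 * h)
    off_diag = ~np.eye(n, dtype=bool)
    term2 = 2.0 * phi(d[off_diag], 1.0).sum() / (n * (n - 1) * h)
    return term1 - term2

rng = np.random.default_rng(0)
x = rng.normal(size=300)
hs = np.linspace(0.05, 1.0, 60)                         # candidate bandwidths
h_cv = hs[np.argmin([lscv_score(x, h) for h in hs])]    # cross-validated bandwidth
```

The article's point is that the bandwidth minimizing this criterion coincides with the one chosen by the mean-squared-error selector used for intensity estimation under a stationary Cox process model.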

Journal ArticleDOI
TL;DR: In this article, the density f of a unit vector random variable X taking values on a k-dimensional sphere Ω is estimated based on n independent observations X1, …, Xn on X. The proposed estimator is of the form $f_n(x) = (nh^{k-1})^{-1} C(h) \sum_{i=1}^{n} K[(1 - x'X_i)/h^2]$, $x \in \Omega$, where K is a kernel function defined on R+; conditions are imposed on K and f to prove point...
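
A minimal sketch of an estimator of this form on the unit circle (k = 2), assuming K(u) = exp(-u) and computing the normalizing constant C(h) numerically so the estimate integrates to one over the circle; these choices are illustrative, not the conditions imposed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.vonmises(0.0, 4.0, size=300)                     # angles of the sample
X = np.column_stack([np.cos(theta), np.sin(theta)])          # observations on the unit circle

def f_raw(x, X, h, k=2):
    """Un-normalized estimate (n h^(k-1))^(-1) * sum_i K[(1 - x'X_i)/h^2], K(u) = exp(-u)."""
    u = (1.0 - X @ x) / h**2
    return np.exp(-u).sum() / (X.shape[0] * h ** (k - 1))

h = 0.3
phi = np.linspace(-np.pi, np.pi, 720, endpoint=False)
grid = np.column_stack([np.cos(phi), np.sin(phi)])
vals = np.array([f_raw(g, X, h) for g in grid])
C_h = 1.0 / (vals.sum() * (phi[1] - phi[0]))                 # numerical normalizing constant C(h)
f_hat = C_h * vals                                           # density estimate on the circle
```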

Proceedings ArticleDOI
15 Jun 1988
TL;DR: In this article, a numerically robust relative error method (REM) for state-space model order reduction is described, based on Desai's balanced stochastic truncation (BST) technique for which M. Green has obtained an L∞ relative-error bound.
Abstract: A numerically robust Relative Error Method (REM) for state-space model order reduction is described. Our algorithm is based on Desai's Balanced Stochastic Truncation (BST) technique, for which M. Green has obtained an L∞ relative-error bound. However, unlike previous methods, our Schur method completely circumvents the numerically delicate initial step of obtaining a minimal balanced stochastic realization (BSR) of the power spectrum matrix G(s)G^T(-s).

Journal ArticleDOI
TL;DR: Modifications of estimators proposed by Breiman, Meisel and Purcell and Abramson, which have variable window widths, are seen to have very fast rates of convergence.
Abstract: Kernel density estimators which allow different amounts of smoothing at different locations are studied. Modifications of estimators proposed by Breiman, Meisel and Purcell (1977) and Abramson (1982a), which have variable window widths, are seen to have very fast rates of convergence. These rates have traditionally been obtained using a less natural higher order kernel, which has the disadvantage of allowing an estimator which takes on negative values.
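
A minimal Python sketch of a square-root-law variable-bandwidth estimator in the spirit of Abramson's proposal: a fixed-bandwidth pilot estimate sets local window widths proportional to f_pilot(X_i)^(-1/2). The Gaussian kernel, pilot bandwidth, and geometric-mean scaling are illustrative assumptions, not the exact modifications studied in the paper.

```python
import numpy as np

def kde(x_eval, data, h):
    """Fixed-bandwidth Gaussian-kernel density estimate."""
    u = (x_eval[:, None] - data[None, :]) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi) / h).mean(axis=1)

def variable_kde(x_eval, data, h):
    """Variable-bandwidth estimate with Abramson-style square-root law h_i."""
    pilot = kde(data, data, h)                      # pilot density at the data points
    g = np.exp(np.mean(np.log(pilot)))              # geometric mean, keeps overall scale
    h_i = h * np.sqrt(g / pilot)                    # smaller windows where density is high
    u = (x_eval[:, None] - data[None, :]) / h_i[None, :]
    return (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi) / h_i[None, :]).mean(axis=1)

rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=500)               # heavy-tailed sample
grid = np.linspace(-8, 8, 401)
f_hat = variable_kde(grid, data, h=0.4)
```

Because every local kernel is a genuine density, the estimate stays nonnegative, which is the advantage over higher-order kernels noted in the abstract.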

Patent
12 Aug 1988
TL;DR: In this article, a distribution density function describing the density of the signal in a signal space assigned to a voxel of the region to be imaged is computed and then convolved with a resolution function, preferably a Gaussian function.
Abstract: An imaging technique is disclosed for enhancing the contrast of an image, in particular for enhancing the contrast between subregions of a region of interest which may have similar signal characteristics and significantly distinct physical properties. A distribution density function describing the density of the signal in a signal space assigned to a voxel of the region to be imaged is first computed. This distribution function is then convolved with a resolution function, preferably a Gaussian function. Advantageously, the variance of the Gaussian is greater than, and a multiple of, the variance of the noise statistics of the input image. The result of the convolution of the distribution function with the resolution function defines a scale, preferably a grey scale, which assigns a particular tone to a pixel of the image corresponding to the voxel of the region to be imaged. The standard deviation is preferably chosen by the user and defines the resolution of the final image in the signal space. The noise in the output image can be decreased by increasing the standard deviation of the convolving Gaussian. For large values of the variance of the Gaussian, the contrast-to-noise ratio is comparable to standard images. The resulting grey scale creates a greater contrast between areas of different volumes having similar signal characteristics. Other resolution functions can be used.

Journal ArticleDOI
TL;DR: In this article, the selection of the order of the kernel in a kernel density estimator is considered from two points of view: theoretical properties are investigated by a mean integrated squared error analysis of the problem and cross validation is proposed as a practical method of choice.
Abstract: The selection of the order, i.e., number of vanishing moments, of the kernel in a kernel density estimator is considered from two points of view. First, theoretical properties are investigated by a mean integrated squared error analysis of the problem. Second, and perhaps more importantly, cross validation is proposed as a practical method of choice, and theoretical backing for this is provided through an asymptotic optimality result.

Journal ArticleDOI
TL;DR: In this article, an integrated system of stand models has been developed in which models of different levels of resolution are related in a unified mathematical structure, and from them a set of growth and survival functions is derived to produce models structurally compatible at lower stages of resolution.

Journal ArticleDOI
TL;DR: In this paper, a general procedure is developed to obtain the exact solutions for Fokker-Planck equations in the state of statistical stationarity, which is based on the idea of splitting each drift coefficient and each diffusion coefficient in a FOKker-planck equation into two parts, associated with the circulatory and potential probability flows, respectively.
Abstract: The response of a dynamical system to Gaussian white-noise excitations may be represented by a Markov vector whose probability density is governed by the well-known Fokker-Planck equation. In this paper a general procedure is developed to obtain the exact solutions for Fokker-Planck equations in the state of statistical stationarity. The dynamical systems considered are generally oscillatory and non-linear, and the random excitations may be additive, or multiplicative, or both. The procedure is based on the idea of splitting each drift coefficient and each diffusion coefficient in a Fokker-Planck equation into two parts, associated with the circulatory and potential probability flows, respectively. In so doing two sets of equations are derived for the probability potential which is the essential ingredient required to construct the probability density of the response. The procedure also provides a natural means to identify equivalent stochastic systems which share the same probability distribution.
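
A minimal numerical illustration of the simplest, purely potential-flow case rather than the paper's general construction: for dx = -U'(x) dt + sqrt(2D) dW the stationary Fokker-Planck solution is p(x) proportional to exp(-U(x)/D), which can be checked against an Euler-Maruyama simulation. The double-well potential, diffusion level D, and step size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps = 0.5, 5e-3, 200_000
U = lambda x: 0.25 * x**4 - 0.5 * x**2          # double-well potential (illustrative)
dU = lambda x: x**3 - x                          # its derivative (drift = -dU)

x, samples = 0.0, np.empty(n_steps)
for i in range(n_steps):                         # Euler-Maruyama simulation of the SDE
    x += -dU(x) * dt + np.sqrt(2.0 * D * dt) * rng.normal()
    samples[i] = x

grid = np.linspace(-3.0, 3.0, 301)
p_exact = np.exp(-U(grid) / D)
p_exact /= p_exact.sum() * (grid[1] - grid[0])   # stationary FPK solution, normalized
hist, edges = np.histogram(samples, bins=100, range=(-3, 3), density=True)
# the histogram should track p_exact at the bin centres
```

Here U/D plays the role of the probability potential; the paper's contribution is to construct such a potential for systems that also carry a circulatory probability flow.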

Journal ArticleDOI
TL;DR: The principle of maximum entropy is used to analyse a G/G/1 queue at equilibrium when the constraints involve only the first two moments of the interarrival-time and service-time distributions.
Abstract: The principle of maximum entropy is used to analyse a G/G/1 queue at equilibrium when the constraints involve only the first two moments of the interarrival-time and service-time distributions. Robust recursive relations for the queue-length distribution are determined, and a probability density function analogue is characterized. Furthermore, connections with classical queueing theory and operational analysis are established, and an overall approximation, based on the concept of ‘global’ maximum entropy, is introduced. Numerical examples provide useful information on how critically system behaviour is affected by the distributional form of the interarrival and service times, and favourable comparisons are made with diffusion and other approximations. Comments on the implication of the work to the analysis of more general queueing systems are included.
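
A minimal sketch of the maximum-entropy form of the queue-length distribution when the constraints reduce to the server utilization rho = P(N > 0) and the mean queue length L (quantities that would themselves be fixed by the first two moments of the interarrival and service times); the numerical values are illustrative, and this is the generic geometric-tail form rather than the paper's full recursive scheme.

```python
import numpy as np

def max_entropy_queue(rho, L, n_max=50):
    """Maximum-entropy queue-length distribution given P(N>0) = rho and E[N] = L.

    The entropy-maximizing form is p(0) = 1 - rho and p(n) = g * x**n for n >= 1,
    with x and g chosen so that sum_{n>=1} p(n) = rho and sum_n n p(n) = L.
    """
    x = (L - rho) / L
    g = rho * (1.0 - x) / x
    n = np.arange(n_max + 1)
    p = np.where(n == 0, 1.0 - rho, g * x ** n.astype(float))
    return p

p = max_entropy_queue(rho=0.7, L=2.0)    # illustrative utilization and mean queue length
```

For the M/M/1 queue this form reproduces the exact geometric distribution, which is one of the connections with classical queueing theory mentioned above.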

Journal ArticleDOI
TL;DR: In this paper, the authors generalized Refsdal's method to study the propagation of light rays through an inhomogeneous universe and derived the probability distribution for the linear component of the cumulative shear along light rays.
Abstract: Refsdal's (1970) method is generalized to study the propagation of light rays through an inhomogeneous universe. The probability distribution for the linear component of the cumulative shear (CS) along light rays is derived, and it is shown that the CS can be dominated by nonlinear components, especially for light rays in empty cones. The amplification tail of the amplification probability distribution is compared with analytic results; these linear investigations are shown to underestimate the high-amplification probability and hence the importance of the amplification bias in source counts. The distribution of the ellipticity of images of infinitesimal circular sources is derived, and it is shown that this can be dominated by the nonlinear contributions to the CS.

Journal ArticleDOI
TL;DR: In this article, computer simulation results are reported for the two-point matrix probability function S/sub 2/ of two-phase random media composed of disks distributed with an arbitrary degree of impenetrability.

Journal ArticleDOI
TL;DR: In this paper, a high-resolution sampling technique was used to measure concentration fluctuations simultaneously at several points in space, where the probability distribution function was measured as a function of the detector location relative to a continuous and steady source.
Abstract: We describe a new high-resolution sampling technique which can be used to measure concentration fluctuations simultaneously at several points in space. The technique has been used to measure the probability distribution function as a function of the detector location relative to a continuous and steady source. Results are compared to previous experiments and theoretical predictions. The spectra of the concentration fluctuations are analyzed and their behaviour as a function of downwind distance from the source is described.

Journal ArticleDOI
TL;DR: In this paper, the joint droplet size and velocity distribution is derived by applying information theory to the atomization process, along with the normalization of the probability distribution function and the physical conservation laws of mass, momentum and energy.
Abstract: In this paper, the joint droplet size and velocity distribution is derived by applying information theory to the atomization process, along with the normalization of the probability distribution function and the physical conservation laws of mass, momentum and energy. The obtained distribution contains the Weber number as a variable, and agrees with experimental observations. An equation for the Sauter mean diameter (D32) is obtained which agrees with several of the expressions that have been obtained from correlations of experimental data. When the Weber number exceeds 4000, the results given by Li and Tankin (1987) are appropriate.