
Showing papers on "Probability density function published in 2012"


Journal ArticleDOI
TL;DR: The basic idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise and it is shown that the new method strikes a competitive trade-off in comparison to other estimation methods for unnormalized models.
Abstract: We consider the task of estimating, from observed data, a probabilistic model that is parameterized by a finite number of parameters. In particular, we are considering the situation where the model probability density function is unnormalized. That is, the model is only specified up to the partition function. The partition function normalizes a model so that it integrates to one for any choice of the parameters. However, it is often impossible to obtain it in closed form. Gibbs distributions, Markov and multi-layer networks are examples of models where analytical normalization is often impossible. Maximum likelihood estimation can then not be used without resorting to numerical approximations which are often computationally expensive. We propose here a new objective function for the estimation of both normalized and unnormalized models. The basic idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise. With this approach, the normalizing partition function can be estimated like any other parameter. We prove that the new estimation method leads to a consistent (convergent) estimator of the parameters. For large noise sample sizes, the new estimator is furthermore shown to behave like the maximum likelihood estimator. In the estimation of unnormalized models, there is a trade-off between statistical and computational performance. We show that the new method strikes a competitive trade-off in comparison to other estimation methods for unnormalized models. As an application to real data, we estimate novel two-layer models of natural image statistics with spline nonlinearities.
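A minimal sketch of the discriminative objective described above, applied to a toy one-dimensional unnormalized Gaussian model: the log partition function is treated as a free parameter and estimated by logistic regression against artificial noise. The noise distribution, the noise-to-data ratio, and the parameterization are illustrative assumptions, not the authors' exact setup.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, size=1000)            # observed data
nu = 5                                         # noise-to-data sample ratio (assumed)
y = rng.normal(0.0, 3.0, size=nu * len(x))     # artificially generated noise
noise_logpdf = lambda z: norm.logpdf(z, 0.0, 3.0)

def log_model(z, theta):
    # unnormalized log-model; c plays the role of the log-normalization constant
    mu, log_sigma, c = theta
    return -0.5 * ((z - mu) / np.exp(log_sigma)) ** 2 + c

def nce_loss(theta):
    # logistic regression discriminating data from noise
    G_x = log_model(x, theta) - noise_logpdf(x)
    G_y = log_model(y, theta) - noise_logpdf(y)
    return np.log1p(nu * np.exp(-G_x)).mean() + nu * np.log1p(np.exp(G_y) / nu).mean()

theta_hat = minimize(nce_loss, x0=np.zeros(3)).x
print(theta_hat)   # estimated mean, log std, and log-normalization constant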

695 citations


Journal ArticleDOI
TL;DR: This work addresses the solution of large-scale statistical inverse problems in the framework of Bayesian inference with a so-called Stochastic Monte Carlo method.
Abstract: We address the solution of large-scale statistical inverse problems in the framework of Bayesian inference. The Markov chain Monte Carlo (MCMC) method is the most popular approach for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. MCMC methods face two central difficulties when applied to large-scale inverse problems: first, the forward models (typically in the form of partial differential equations) that map uncertain parameters to observable quantities make the evaluation of the probability density at any point in parameter space very expensive; and second, the high-dimensional parameter spaces that arise upon discretization of infinite-dimensional parameter fields make the exploration of the probability density function prohibitive. The challenge for MCMC methods is to construct proposal functions that simultaneously provide a good approximation of the target density while being inexpensive to manipulate. Here we present a so-called Stoch...
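For orientation only, the baseline the paper improves on is plain random-walk Metropolis sampling of the posterior, sketched below on a cheap toy log-posterior. The proposal covariance here is a fixed identity matrix; the paper's point is precisely that a better, target-informed proposal is needed at scale, and that is not implemented in this sketch.

import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta):
    # stand-in for an expensive forward-model-based posterior (toy Gaussian here)
    return -0.5 * np.sum(theta ** 2)

def metropolis(log_post, theta0, n_steps=5000, prop_cov=None):
    d = len(theta0)
    C = np.eye(d) if prop_cov is None else prop_cov     # proposal covariance
    L = np.linalg.cholesky(C)
    chain, theta, lp = [], np.asarray(theta0, float), log_post(theta0)
    for _ in range(n_steps):
        prop = theta + L @ rng.standard_normal(d)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

samples = metropolis(log_posterior, theta0=np.zeros(4))
print(samples.mean(axis=0), samples.std(axis=0))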

411 citations


Journal ArticleDOI
TL;DR: The key idea is to align the complexity level and order of analysis with the reliability and detail level of statistical information on the input parameters to avoid the necessity to assign parametric probability distributions that are not sufficiently supported by limited available data.

350 citations


Journal ArticleDOI
TL;DR: In this article, the effects of self-gravity and magnetic fields on supersonic turbulence in isothermal molecular clouds with high-resolution simulations and adaptive mesh refinement were examined, and it was shown that gravity splits the clouds into two populations, one low-density turbulent state and one high-density collapsing state.
Abstract: We examine the effects of self-gravity and magnetic fields on supersonic turbulence in isothermal molecular clouds with high-resolution simulations and adaptive mesh refinement. These simulations use large root grids (512^3) to capture turbulence and four levels of refinement to follow the collapse to high densities, for an effective resolution of 8192^3. Three Mach 9 simulations are performed, two super-Alfvénic and one trans-Alfvénic. We find that gravity splits the clouds into two populations, one low-density turbulent state and one high-density collapsing state. The low-density state exhibits properties similar to non-self-gravitating turbulence in this regime, and we examine the effects of varied magnetic field strength on statistical properties: the density probability distribution function is approximately lognormal, the velocity power spectral slopes decrease with decreasing mean field strength, the alignment between velocity and magnetic field increases with the field, and the magnetic field probability distribution can be fitted to a stretched exponential. The high-density state is well characterized by self-similar spheres: the density probability distribution is a power law, the collapse rate decreases with increasing mean field, the density power spectrum P(ρ, k) has a positive slope, thermal-to-magnetic pressure ratios are roughly unity for all mean field strengths, dynamic-to-magnetic pressure ratios are larger than unity for all mean field strengths, and the magnetic field distribution follows a power law. The high Alfvén Mach numbers in collapsing regions explain the recent observations of magnetic influence decreasing with density. We also find that the high-density state is typically found in filaments formed by converging flows, consistent with recent Herschel observations. Possible modifications to existing star formation theories are explored. The overall trans-Alfvénic nature of star-forming clouds is discussed.

164 citations


Journal ArticleDOI
TL;DR: In this article, the authors applied an analytical method that combined the cumulant method with the Cornish-Fisher expansion to solve the voltage regulation problem in photovoltaic distributed generation.

162 citations


Journal ArticleDOI
TL;DR: It is shown how the proposed EW distribution offers an excellent fit to simulation and experimental data under all aperture averaging conditions, under weak and moderate turbulence conditions, as well as for point-like apertures.
Abstract: Nowadays, the search for a distribution capable of modeling the probability density function (PDF) of irradiance data under all conditions of atmospheric turbulence in the presence of aperture averaging still continues. Here, a family of PDFs alternative to the widely accepted Log-Normal and Gamma-Gamma distributions is proposed to model the PDF of the received optical power in free-space optical communications, namely, the Weibull and the exponentiated Weibull (EW) distribution. Particularly, it is shown how the proposed EW distribution offers an excellent fit to simulation and experimental data under all aperture averaging conditions, under weak and moderate turbulence conditions, as well as for point-like apertures. Another very attractive property of these distributions is the simple closed form expression of their respective PDF and cumulative distribution function.
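Since the abstract emphasizes the simple closed forms, here is a sketch of the exponentiated Weibull PDF and CDF in their standard three-parameter form; the shape parameters alpha, beta and scale eta follow common notation, and the test values are purely illustrative.

import numpy as np

def ew_cdf(x, alpha, beta, eta):
    # exponentiated Weibull CDF: ordinary Weibull CDF raised to the power alpha
    return (1.0 - np.exp(-(np.asarray(x, float) / eta) ** beta)) ** alpha

def ew_pdf(x, alpha, beta, eta):
    x = np.asarray(x, float)
    w = np.exp(-(x / eta) ** beta)
    return alpha * beta / eta * (x / eta) ** (beta - 1) * w * (1.0 - w) ** (alpha - 1)

# sanity check: the PDF integrates to ~1 for some illustrative parameter values
x = np.linspace(1e-6, 20.0, 200001)
dx = x[1] - x[0]
print((ew_pdf(x, alpha=2.0, beta=1.5, eta=1.0) * dx).sum())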

152 citations


Journal ArticleDOI
TL;DR: In this paper, the generalized density evolution equation (GDEE) is derived for nonlinear stochastic systems, which is a unified basis for the probability density evolution equations holding for different types of systems.

145 citations


Journal ArticleDOI
TL;DR: An upper bound on the truncation error is derived and this is used to present an adaptive computational approach that selects the minimum number of terms required for accuracy in the complex Double Gaussian distribution.
Abstract: In this paper, we derive the joint (amplitude, phase) distribution of the product of two independent non-zero-mean complex Gaussian random variables. We call this new distribution the complex Double Gaussian distribution. This probability density function (PDF) is a doubly infinite summation over modified Bessel functions of the first and second kind. We analyze the behavior of this sum and show that the number of terms needed for accuracy depends on the Rician k-factors of the two input variables. We derive an upper bound on the truncation error and use this to present an adaptive computational approach that selects the minimum number of terms required for accuracy. We also present the PDF for the special case where either one or both of the input complex Gaussian random variables is zero-mean. We demonstrate the relevance of our results by deriving the optimal Neyman-Pearson detector for a time reversal detection scheme and computing the receiver operating characteristics through Monte Carlo simulations, and by computing the symbol error probability (SEP) for a single-channel M-ary phase-shift-keying (M-PSK) communication system.
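A Monte Carlo sanity check of the quantity the paper treats analytically: samples of the product of two independent non-zero-mean complex Gaussians and their empirical amplitude and phase histograms. The means, variances, and resulting k-factors are arbitrary illustrative choices, and the paper's closed-form series is not reproduced here.

import numpy as np

rng = np.random.default_rng(2)
n = 200000

def complex_gaussian(mean, sigma2, size):
    # non-zero-mean circularly symmetric complex Gaussian
    s = np.sqrt(sigma2 / 2.0)
    return mean + s * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

x = complex_gaussian(1.0 + 1.0j, 1.0, n)   # Rician k-factor |mean|^2 / sigma^2 = 2
y = complex_gaussian(2.0 + 0.0j, 0.5, n)   # Rician k-factor = 8
z = x * y                                  # product whose (amplitude, phase) law is derived in the paper

amp, phase = np.abs(z), np.angle(z)
amp_pdf, amp_edges = np.histogram(amp, bins=200, density=True)       # empirical amplitude PDF
phase_pdf, phase_edges = np.histogram(phase, bins=200, density=True) # empirical phase PDF
print(amp.mean(), phase.mean())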

144 citations


Journal ArticleDOI
TL;DR: In this article, the quality of wind speed assessment depends on the capability of the chosen probability density function (PDF) to describe the measured wind speed frequency distribution, which is critical for harnessing wind power effectively.
Abstract: Accurate wind speed modeling is critical in estimating wind energy potential for harnessing wind power effectively. The quality of wind speed assessment depends on the capability of the chosen probability density function (PDF) to describe the measured wind speed frequency distribution. The objective of this study is to describe (model) wind speed characteristics using three mixture probability density functions, Weibull-generalized extreme value (GEV), Weibull-lognormal, and GEV-lognormal, which have not been tried before. Statistical parameters such as the maximum error in the Kolmogorov-Smirnov test, root mean square error, Chi-square error, coefficient of determination, and power density error are considered as judgment criteria to assess the fitness of the probability density functions. Results indicate that the Weibull-GEV PDF is able to describe unimodal as well as bimodal wind distributions accurately, whereas the GEV-lognormal PDF is able to describe the familiar bell-shaped unimodal distribution well. Results show that mixture probability functions are better alternatives to the conventional Weibull, two-component mixture Weibull, gamma, and lognormal PDFs for describing wind speed characteristics.
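A sketch of how one of the proposed mixtures could be fitted: a two-component Weibull-GEV mixture estimated by maximum likelihood on synthetic bimodal "wind speed" data. The synthetic sample, the optimizer, and the starting values are all illustrative assumptions; the paper's estimation procedure and goodness-of-fit criteria are not reproduced.

import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(3)
# synthetic bimodal sample standing in for measured wind speeds
v = np.concatenate([stats.weibull_min.rvs(2.0, scale=5.0, size=600, random_state=rng),
                    stats.genextreme.rvs(-0.1, loc=12.0, scale=2.0, size=400, random_state=rng)])
v = v[v > 0]

def neg_log_lik(p):
    w, k, lam, xi, mu, sig = p            # mixture weight, Weibull (k, lam), GEV (xi, mu, sig)
    if not (0 < w < 1) or k <= 0 or lam <= 0 or sig <= 0:
        return np.inf
    pdf = (w * stats.weibull_min.pdf(v, k, scale=lam)
           + (1 - w) * stats.genextreme.pdf(v, xi, loc=mu, scale=sig))
    return -np.sum(np.log(pdf + 1e-300))

p0 = [0.5, 2.0, np.median(v), 0.0, np.percentile(v, 75), v.std()]
fit = minimize(neg_log_lik, p0, method="Nelder-Mead", options={"maxiter": 5000})
print(fit.x)   # estimated mixture weight and component parameters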

141 citations


Journal ArticleDOI
TL;DR: The decode-and-forward (DF) protocol is analyzed for free space optical (FSO) links following the Gamma-Gamma distribution, and the average bit error rate of DF relaying is obtained.
Abstract: We analyze performance of the decode-and-forward (DF) protocol in the free space optical (FSO) links following the Gamma-Gamma distribution. The cumulative distribution function (cdf) and probability density function (pdf) of a random variable containing mixture of the Gamma-Gamma and Gaussian random variables is derived. By using the derived cdf and pdf, average bit error rate of the DF relaying is obtained.
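For reference, the Gamma-Gamma irradiance model referred to above has a widely used unit-mean PDF, sketched below with illustrative alpha and beta; the paper's mixture of Gamma-Gamma and Gaussian variables is not reproduced here.

import numpy as np
from scipy.special import gamma as gamma_fn, kv

def gamma_gamma_pdf(I, alpha, beta):
    # standard unit-mean Gamma-Gamma irradiance PDF; kv is the modified Bessel function K_v
    I = np.asarray(I, float)
    coef = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (gamma_fn(alpha) * gamma_fn(beta))
    return coef * I ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

I = np.linspace(1e-4, 12.0, 120001)
dI = I[1] - I[0]
print((gamma_gamma_pdf(I, alpha=4.0, beta=1.9) * dI).sum())   # should be close to 1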

116 citations


Book ChapterDOI
David Scott1
TL;DR: This chapter examines the use of flexible methods to approximate an unknown density function, and techniques appropriate for visualization of densities in up to four dimensions, as well as descriptions of the visualization of multivariate data and density estimates.
Abstract: This chapter examines the use of flexible methods to approximate an unknown density function, and techniques appropriate for visualization of densities in up to four dimensions. The statistical analysis of data is a multilayered endeavor. Data must be carefully examined and cleaned to avoid spurious findings.

Journal ArticleDOI
TL;DR: In this paper, a stochastic model for magnetized plasmas is presented, with the plasma density given by a random sequence of bursts with a fixed wave form, which predicts a parabolic relation between the skewness and kurtosis moments of the plasma fluctuations.
Abstract: Single-point measurements of fluctuations in the scrape-off layer of magnetized plasmas are generally found to be dominated by large-amplitude bursts which are associated with radial motion of bloblike structures. A stochastic model for these fluctuations is presented, with the plasma density given by a random sequence of bursts with a fixed wave form. When the burst events occur in accordance with a Poisson process, this model predicts a parabolic relation between the skewness and kurtosis moments of the plasma fluctuations. In the case of an exponential wave form and exponentially distributed burst amplitudes, the probability density function for the fluctuation amplitudes is shown to be a Gamma distribution with the scale parameter given by the average burst amplitude, and the shape parameter given by the ratio of the burst duration and waiting times.
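A quick simulation of the model described above, under the assumption of a one-sided exponential pulse and unit parameters chosen for illustration; it checks the stated prediction that the amplitude PDF is a Gamma distribution with shape equal to the duration-to-waiting-time ratio and scale equal to the mean burst amplitude.

import numpy as np
from scipy import stats
from scipy.signal import fftconvolve

rng = np.random.default_rng(4)
tau_d, tau_w, a_mean = 1.0, 0.5, 1.0       # burst duration, mean waiting time, mean burst amplitude
T, dt = 10000.0, 0.01
n = int(T / dt)

# Poisson burst arrivals with exponentially distributed amplitudes, dropped on a time grid
n_bursts = rng.poisson(T / tau_w)
impulses = np.zeros(n)
np.add.at(impulses, rng.integers(0, n, n_bursts), rng.exponential(a_mean, n_bursts))

kernel = np.exp(-np.arange(0.0, 10.0 * tau_d, dt) / tau_d)   # one-sided exponential wave form
signal = fftconvolve(impulses, kernel)[:n]

# predicted amplitude PDF: Gamma with shape tau_d/tau_w and scale a_mean
pdf_emp, edges = np.histogram(signal, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.abs(pdf_emp - stats.gamma.pdf(centers, tau_d / tau_w, scale=a_mean)).max())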

Journal ArticleDOI
TL;DR: The performance of two-way amplify-and-forward (AF) relaying networks, considering transmissions over independent but not necessarily identically distributed Rayleigh fading channels in the presence of a finite number of co-channel interferers, is studied.
Abstract: The performance of two-way amplify-and-forward (AF) relaying networks, considering transmissions over independent but not necessarily identically distributed Rayleigh fading channels in the presence of a finite number of co-channel interferers, is studied. Specifically, closed-form expressions for the cumulative distribution function (CDF) of the equivalent signal-to-interference-plus-noise ratio (SINR), the error probability, the outage probability and the system's achievable rate, are presented. Furthermore, an asymptotic expression for the probability density function (PDF) of the equivalent instantaneous SINR is derived, based on which simple and general asymptotic formulas for the error and outage probabilities are derived and analyzed. Numerical results are also provided, sustained by simulations which corroborate the exactness of the theoretical analysis.

Journal ArticleDOI
TL;DR: A heavy-tailed CG model with an inverse Gaussian texture distribution is proposed and its distributional properties, such as closed-form expressions for its probability density function (p.d.f.) as well as its amplitude p.d.f., are derived.
Abstract: The compound-Gaussian (CG) distributions have been successfully used for modelling the non-Gaussian clutter measured by high-resolution radars. Within the CG class, the complex K-distribution and the complex t-distribution have been used for modelling sea clutter which is often heavy-tailed or spiky in nature. In this paper, a heavy-tailed CG model with an inverse Gaussian texture distribution is proposed and its distributional properties such as closed-form expressions for its probability density function (p.d.f.) as well as its amplitude p.d.f., amplitude cumulative distribution function and its kurtosis parameter are derived. Experimental validation of its usefulness for modelling measured real-world radar lake-clutter is provided where it is shown to yield better fits than its widely used competitors.
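This is only a sampling sketch of the compound-Gaussian construction with an inverse Gaussian texture (clutter = sqrt(texture) times complex Gaussian speckle); the paper's closed-form p.d.f. expressions are not reproduced, and the unit-mean texture normalization and shape value are assumptions.

import numpy as np

rng = np.random.default_rng(5)
n = 200000

# texture: inverse Gaussian with unit mean and an illustrative shape parameter
lam = 0.5
tau = rng.wald(1.0, lam, size=n)

# speckle: zero-mean, unit-power circular complex Gaussian
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)

clutter = np.sqrt(tau) * x               # compound-Gaussian sample with inverse Gaussian texture
amplitude = np.abs(clutter)
# Rayleigh (pure Gaussian) clutter would give E[A^4]/E[A^2]^2 = 2; heavy tails push this above 2
print((amplitude ** 4).mean() / (amplitude ** 2).mean() ** 2)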

Journal ArticleDOI
TL;DR: The ergodic capacity of multihop wireless networks in the presence of external interference is studied and a simple and general asymptotic expression for the error probability is presented and discussed.
Abstract: A study of the effect of cochannel interference on the performance of multihop wireless networks with amplify-and-forward (AF) relaying is presented. Considering that transmissions are performed over Rayleigh fading channels, first, the exact end-to-end signal-to-interference-plus-noise ratio (SINR) of the communication system is formulated and upper bounded. Then, the cumulative distribution function (cdf) and probability density function (pdf) of the upper bounded end-to-end SINR are determined. Based on those results, a closed-form expression for the error probability is obtained. Furthermore, an approximate expression for the pdf of the instantaneous end-to-end SINR is derived, and based on this, a simple and general asymptotic expression for the error probability is presented and discussed. In addition, the ergodic capacity of multihop wireless networks in the presence of external interference is studied. Moreover, analytical comparisons between AF and decode-and-forward (DF) multihop in terms of error probability and ergodic capacity are presented. Finally, optimization of the power allocation at the network's transmit nodes and the positioning of the relays are addressed.

Journal ArticleDOI
TL;DR: In this article, the authors derived a theory for estimating Reynolds normal and shear stresses from PIV images with single-pixel resolution, based on the analysis of the correlation function to identify the probability density function from which the Reynolds stresses can be derived in a 2-dimensional regime.
Abstract: This article derives a theory for estimating Reynolds normal and shear stresses from PIV images with single-pixel resolution. The main idea is the analysis of the correlation function to identify the probability density function from which the Reynolds stresses can be derived in a 2-D regime. The work establishes a theoretical framework including the influence of the particle image diameter and the velocity gradients on the shape of the correlation function. Synthetic data sets are used for the validation of the proposed method. The application of the evaluation method on two experimental data sets shows that high resolution and accuracy are also obtained with experimental data. The approach is very general and can also be applied to correlation peaks that are obtained from sum-of-correlation PIV evaluations.

Journal ArticleDOI
01 Feb 2012
TL;DR: This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm by deducing a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms.
Abstract: Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
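To make the object of the analysis concrete, here is a compact DE/rand/1/bin implementation run on a unimodal test function with a unique global optimum; the population size, F, and CR values are arbitrary illustrative settings, and nothing here reproduces the paper's Lyapunov-based convergence argument.

import numpy as np

rng = np.random.default_rng(6)

def de_rand_1_bin(f, bounds, pop_size=30, F=0.8, CR=0.9, n_gen=200):
    # canonical DE/rand/1/bin: rand/1 mutation, binomial crossover, greedy selection
    d = len(bounds)
    lo, hi = np.array(bounds, float).T
    pop = lo + rng.uniform(size=(pop_size, d)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])            # DE-type mutation
            cross = rng.uniform(size=d) < CR
            cross[rng.integers(d)] = True                         # ensure at least one mutant gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f_trial = f(trial)
            if f_trial <= fit[i]:                                 # selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))       # unimodal objective with a unique global optimum
best_x, best_f = de_rand_1_bin(sphere, bounds=[(-5, 5)] * 5)
print(best_x, best_f)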

Journal ArticleDOI
TL;DR: In this article, the probability density function over state space and its mean and covariance matrix are expressed analytically for all time via a special solution of the Fokker-Planck equations for deterministic Hamiltonian systems.
Abstract: One topic of recent interest in the field of space situational awareness is the accurate and consistent representation of an observed object's uncertainty under nonlinear dynamics. This paper presents a method of analytical nonlinear propagation of uncertainty under two-body dynamics. In particular, the probability density function over state space and its mean and covariance matrix are expressed analytically for all time via a special solution of the Fokker-Planck equations for deterministic Hamiltonian systems. The state transition tensor concept is used to express the solution flow of the dynamics. Some numerical examples, where a second-order state transition tensor is found to sufficiently capture the nonlinear effects, are also discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the probability density function (PDF) of the gas density in subsonic and supersonic, isothermal driven turbulence with a systematic set of hydrodynamical grid simulations with resolutions up to 1024^3 cells.
Abstract: The probability density function (PDF) of the gas density in subsonic and supersonic, isothermal, driven turbulence is analyzed with a systematic set of hydrodynamical grid simulations with resolutions up to 1024^3 cells. We performed a series of numerical experiments with root mean square (r.m.s.) Mach number M ranging from the nearly incompressible, subsonic (M=0.1) to the highly compressible, supersonic (M=15) regime. We study the influence of two extreme cases for the driving mechanism by applying a purely solenoidal (divergence-free) and a purely compressive (curl-free) forcing field to drive the turbulence. We find that our measurements fit the linear relation between the r.m.s. Mach number and the standard deviation of the density distribution in a wide range of Mach numbers, where the proportionality constant depends on the type of the forcing. In addition, we propose a new linear relation between the standard deviation of the density distribution and the standard deviation of the velocity in compressible modes, i.e. the compressible component of the r.m.s. Mach number. In this relation the influence of the forcing is significantly reduced, suggesting a linear relation between the standard deviation of the density distribution and the standard deviation of the velocity in compressible modes, independent of the forcing, ranging from the subsonic to the supersonic regime.
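The linear relation referred to above can be written as sigma_{rho/rho0} = b * M with a forcing-dependent proportionality constant b. The small sketch below simply evaluates it, using the commonly quoted values b ~ 1/3 (solenoidal) and b ~ 1 (compressive) from the broader turbulence literature as stand-ins, together with the log-density form sigma_s^2 = ln(1 + b^2 M^2) that holds if the density PDF is close to lognormal; both numerical values and the lognormal assumption are not taken from this abstract.

import numpy as np

def sigma_rho(mach, b):
    # linear density-variance relation: sigma_{rho/rho0} = b * M
    return b * mach

def sigma_s(mach, b):
    # equivalent relation for s = ln(rho/rho0), assuming a lognormal density PDF
    return np.sqrt(np.log1p((b * mach) ** 2))

for M in (0.1, 1.0, 5.0, 15.0):
    print(M, sigma_rho(M, 1.0 / 3.0), sigma_rho(M, 1.0), sigma_s(M, 1.0 / 3.0))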

Journal ArticleDOI
TL;DR: In this paper, the authors studied the beta power distribution and derived explicit expressions for the moments, probability weighted moments, moment generating function, mean deviations, Bonferroni and Lorenz curves, moments of order statistics, entropy and reliability.
Abstract: The power distribution is defined as the inverse of the Pareto distribution. We study in full detail the so-called beta power distribution. We obtain analytical forms for its probability density and hazard rate functions. Explicit expressions are derived for the moments, probability weighted moments, moment generating function, mean deviations, Bonferroni and Lorenz curves, moments of order statistics, entropy and reliability. We estimate the parameters by maximum likelihood. The practicability of the model is illustrated in two applications to real data.
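Assuming the usual beta-G construction with the power distribution G(x) = (x/theta)^k on (0, theta) as baseline, the density takes the form below; the notation (a, b, k, theta) is mine and need not match the authors'.

import numpy as np
from scipy.special import beta as beta_fn

def beta_power_pdf(x, a, b, k, theta):
    # beta-G density: g(x) * G(x)^(a-1) * (1 - G(x))^(b-1) / B(a, b), with power baseline G
    x = np.asarray(x, float)
    G = (x / theta) ** k
    g = k * x ** (k - 1.0) / theta ** k
    return g / beta_fn(a, b) * G ** (a - 1.0) * (1.0 - G) ** (b - 1.0)

x = np.linspace(1e-9, 2.0 - 1e-9, 400001)
dx = x[1] - x[0]
print((beta_power_pdf(x, a=2.0, b=3.0, k=1.5, theta=2.0) * dx).sum())   # should be close to 1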

Journal ArticleDOI
01 Oct 2012
TL;DR: In this paper, a second-order reliability method (SORM) using non-central or general chi-squared distribution was proposed to improve the accuracy of reliability analysis in existing SORM.
Abstract: This paper proposes a novel second-order reliability method (SORM) using non-central or general chi-squared distribution to improve the accuracy of reliability analysis in existing SORM. Conventional SORM contains three types of errors: (1) error due to approximating a general nonlinear limit state function by a quadratic function at most probable point (MPP) in the standard normal U-space, (2) error due to approximating the quadratic function in U-space by a hyperbolic surface, and (3) error due to calculation of the probability of failure after making the previous two approximations. The proposed method contains the first type of error only which is essential to SORM and thus cannot be improved. However, the proposed method avoids the other two errors by describing the quadratic failure surface with the linear combination of non-central chi-square variables and using the linear combination for the probability of failure estimation. Two approaches for the proposed SORM are suggested in the paper. The first approach directly calculates the probability of failure using numerical integration of the joint probability density function (PDF) over the linear failure surface and the second approach uses the cumulative distribution function (CDF) of the linear failure surface for the calculation of the probability of failure. The proposed method is compared with first-order reliability method (FORM), conventional SORM, and Monte Carlo simulation (MCS) results in terms of accuracy. Since it contains fewer approximations, the proposed method shows more accurate reliability analysis results than existing SORM without sacrificing efficiency.

Journal ArticleDOI
TL;DR: In this article, the joint probability density function (jpdf) of the maximum and its position for N non-intersecting Brownian excursions, on the unit time interval, in the large N limit was derived.
Abstract: We compute the joint probability density function (jpdf) $P_N(M, \tau_M)$ of the maximum $M$ and its position $\tau_M$ for $N$ non-intersecting Brownian excursions, on the unit time interval, in the large $N$ limit. For $N\to\infty$, this jpdf is peaked around $M = \sqrt{2N}$ and $\tau_M = 1/2$, while the typical fluctuations behave for large $N$ like $M - \sqrt{2N} \propto s N^{-1/6}$ and $\tau_M - 1/2 \propto w N^{-1/3}$, where $s$ and $w$ are correlated random variables. One obtains an explicit expression of the limiting jpdf $P(s,w)$ in terms of the Tracy-Widom distribution for the Gaussian Orthogonal Ensemble (GOE) of Random Matrix Theory and a psi-function for the Hastings-McLeod solution to the Painlevé II equation. Our result yields, up to a rescaling of the random variables $s$ and $w$, an expression for the jpdf of the maximum and its position for the Airy$_2$ process minus a parabola. The latter describes the fluctuations in many different physical systems belonging to the Kardar-Parisi-Zhang (KPZ) universality class in 1+1 dimensions. In particular, the marginal probability density function (pdf) $P(w)$ yields, up to a model-dependent length scale, the distribution of the endpoint of the directed polymer in a random medium with one free end, at zero temperature. In the large $w$ limit one finds the asymptotic behavior $\log P(w) \sim -w^3/12$.

Journal ArticleDOI
TL;DR: It is demonstrated by the analysis and simulation that max-min criterion based path selection works very well in the multi-hop DF cooperative system.
Abstract: In this letter, we find cumulative distribution function (CDF) and probability density function (PDF) of the generalized max-min Exponential random variable (RV) in terms of converging power series. By using the PDF of this RV, the average bit error rate (BER) of the max-min criterion based best path selection scheme in multi-hop decode-and-forward (DF) cooperative communication system over Rayleigh fading channels is analyzed. It is demonstrated by the analysis and simulation that max-min criterion based path selection works very well in the multi-hop DF cooperative system.
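A Monte Carlo sketch of the random variable in question, restricted to the special case of i.i.d. unit-mean exponential hop SNRs (the paper's generalized, non-identical setting is not covered); in this special case the CDF of the selected path reduces to (1 - exp(-L x))^K for K independent paths of L hops each.

import numpy as np

rng = np.random.default_rng(8)
n_paths, n_hops, n_trials = 4, 3, 200000

# per-hop SNRs as i.i.d. unit-mean exponentials; path metric = min over hops, selection = max over paths
snr = rng.exponential(1.0, size=(n_trials, n_paths, n_hops))
selected = snr.min(axis=2).max(axis=1)

x = np.linspace(0.0, 2.0, 50)
cdf_emp = np.searchsorted(np.sort(selected), x) / n_trials
cdf_iid = (1.0 - np.exp(-n_hops * x)) ** n_paths      # closed form for the i.i.d. special case
print(np.abs(cdf_emp - cdf_iid).max())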

Journal ArticleDOI
TL;DR: In this article, the joint response-excitation probability density function of a stochastic dynamical system driven by coloured noise was derived by using functional integral methods, which can be represented in terms of a superimposition of differential constraints.
Abstract: By using functional integral methods, we determine a computable evolution equation for the joint response-excitation probability density function of a stochastic dynamical system driven by coloured noise. This equation can be represented in terms of a superimposition of differential constraints, i.e. partial differential equations involving unusual limit partial derivatives, the first one of which was originally proposed by Sapsis & Athanassoulis. A connection with the classical response approach is established in the general case of random noise with arbitrary correlation time, yielding a fully consistent new theory for non-Markovian systems. We also address the question of computability of the joint response-excitation probability density function as a solution to a boundary value problem involving only one differential constraint. By means of a simple analytical example, it is shown that, in general, such a problem is undetermined, in the sense that it admits an infinite number of solutions. This issue can be overcome by completing the system with additional relations yielding a closure problem, which is similar to the one arising in the standard response theory. Numerical verification of the equations for the joint response-excitation density is obtained for a tumour cell growth model under immune response.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a novel approach for the diagnosis of gearboxes in presumably nonstationary and unknown operating conditions based on the Renyi entropy derived from coefficients of the wavelet packet transform of measured vibration records.

Journal Article
TL;DR: This paper defines two notions of instability to measure the variability of L(λ) and T as a function of h, and investigates the theoretical properties of these instability measures.
Abstract: High density clusters can be characterized by the connected components of a level set L(λ) = {x : p(x) > λ} of the underlying probability density function p generating the data, at some appropriate level λ ≥ 0. The complete hierarchical clustering can be characterized by a cluster tree T = ∪_λ L(λ). In this paper, we study the behavior of a density level set estimate L(λ) and cluster tree estimate T based on a kernel density estimator with kernel bandwidth h. We define two notions of instability to measure the variability of L(λ) and T as a function of h, and investigate the theoretical properties of these instability measures.
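A small illustration of the objects being studied, using a one-dimensional Gaussian kernel density estimate: the connected components of the super-level set {x : p_hat(x) > lambda} are read off a grid, and re-running with different bandwidth factors shows the bandwidth sensitivity that the paper's instability measures quantify (the measures themselves are not implemented here, and the data, level, and bandwidths are illustrative).

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# two well-separated 1-D modes; high-density clusters are pieces of the super-level set
data = np.concatenate([rng.normal(-3, 0.5, 300), rng.normal(3, 0.5, 300)])

def level_set_clusters(data, lam, bw, grid=np.linspace(-8, 8, 2000)):
    kde = gaussian_kde(data, bw_method=bw)       # kernel density estimate, bandwidth factor bw
    above = kde(grid) > lam
    breaks = np.flatnonzero(np.diff(above.astype(int)))   # transitions between below/above level
    edges = np.r_[0, breaks + 1, len(grid)]
    return [(grid[i], grid[j - 1]) for i, j in zip(edges[:-1], edges[1:]) if above[i]]

for bw in (0.05, 0.2, 1.0):                      # instability: how the clusters move as bw changes
    print(bw, level_set_clusters(data, lam=0.05, bw=bw))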

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo analysis was performed, which consisted of drawing a large number (500) of identically distributed input attributes from the multivariable joint probability distribution function.
Abstract: Soil erosion is one of the most widespread processes of degradation. The erodibility of a soil is a measure of its susceptibility to erosion and depends on many soil properties. The soil erodibility factor varies greatly over space and is commonly estimated using the revised universal soil loss equation. Neglecting information about estimation uncertainty may lead to improper decision-making. One geostatistical approach to spatial analysis is sequential Gaussian simulation, which draws alternative, equally probable, joint realizations of a regionalised variable. Differences between the realizations provide a measure of spatial uncertainty and allow us to carry out an error analysis. The objective of this paper was to assess the model output error of soil erodibility resulting from the uncertainties in the input attributes (texture and organic matter). The study area covers about 30 km² (Calabria, southern Italy). Topsoil samples were collected at 175 locations within the study area in 2006 and the main chemical and physical soil properties were determined. As soil textural size fractions are compositional data, the additive log-ratio (alr) transformation was used to remove the non-negativity and constant-sum constraints on compositional variables. A Monte Carlo analysis was performed, which consisted of drawing a large number (500) of identically distributed input attributes from the multivariable joint probability distribution function. We incorporated spatial cross-correlation information through joint sequential Gaussian simulation, because the model inputs were spatially correlated. The erodibility model was then estimated for each set of the 500 joint realisations of the input variables, and the ensemble of the model outputs was used to infer the erodibility probability distribution function. This approach has also allowed for delineating the areas characterised by greater uncertainty and then suggesting efficient supplementary sampling strategies for further improving the precision of K value predictions.

Journal ArticleDOI
TL;DR: In this article, a more convenient way to deal with the uncertainty of a soil property due to spatial variability, by constraining the generated random field at the locations of actual field measurements, is presented.
Abstract: Spatial variability of soil properties is inherent in soil deposits, whether as a result of natural geological processes or engineering construction. It is therefore important to account for soil variability in geotechnical design in order to represent more realistically a soil’s in situ state. This variability may be modelled as a random field, with a given probability density function and scale of fluctuation. A more convenient way to deal with the uncertainty of a soil property due to spatial variability, by constraining the generated random field at the locations of actual field measurements, is presented in this article. Conditioning the random field at known locations is a powerful tool, not only because it more accurately represents the observed variability on site, but also because it uses the available field information more efficiently. In situ cone penetration test (CPT) data from a particular test site are used to determine the input statistics for generating random fields, which are later constrained (conditioned) at the locations of actual CPT measurements using the Kriging interpolation method. The results from the conditional random fields are then analysed, to quantify how the number of field measurements used influences the reduction of uncertainty. It is shown that the spatial uncertainty relative to the original (unconditional) random field reduces with the number of CPTs used in the conditioning.

Journal ArticleDOI
TL;DR: In this article, a general inference framework for marked Poisson processes observed over time or space is proposed, which exploits the connection of nonhomogeneous Poisson process intensity with a density function.
Abstract: We propose a general inference framework for marked Poisson processes observed over time or space. Our modeling approach exploits the connection of nonhomogeneous Poisson process intensity with a density function. Nonparametric Dirichlet process mixtures for this density, combined with nonparametric or semiparametric modeling for the mark distribution, yield flexible prior models for the marked Poisson process. In particular, we focus on fully nonparametric model formulations that build the mark density and intensity function from a joint nonparametric mixture, and provide guidelines for straightforward application of these techniques. A key feature of such models is that they can yield flexible inference about the conditional distribution for multivariate marks without requiring specification of a complicated dependence scheme. We address issues relating to choice of the Dirichlet process mixture kernels, and develop methods for prior specification and posterior simulation for full inference about functionals of the marked Poisson process. Moreover, we discuss a method for model checking that can be used to assess and compare goodness of fit of different model specifications under the proposed framework. The methodology is illustrated with simulated and real data sets.

Journal ArticleDOI
TL;DR: In this article, the authors show that the marginal likelihood can be reliably computed from a posterior sample using Lebesgue integration theory in one of two ways: (1) when the HMA integral exists, compute the measure function numerically and analyze the resulting quadrature to control error; (2) compute the measure function numerically for the marginal likelihood integral itself using a space-partitioning tree, followed by quadrature.
Abstract: Determining the marginal likelihood from a simulated posterior distribution is central to Bayesian model selection but is computationally challenging. The often-used harmonic mean approximation (HMA) makes no prior assumptions about the character of the distribution but tends to be inconsistent. The Laplace approximation is stable but makes strong, and often inappropriate, assumptions about the shape of the posterior distribution. Here, I argue that the marginal likelihood can be reliably computed from a posterior sample using Lebesgue integration theory in one of two ways: 1) when the HMA integral exists, compute the measure function numerically and analyze the resulting quadrature to control error; 2) compute the measure function numerically for the marginal likelihood integral itself using a space-partitioning tree, followed by quadrature. The first algorithm automatically eliminates the part of the sample that contributes large truncation error in the HMA. Moreover, it provides a simple graphical test for the existence of the HMA integral. The second algorithm uses the posterior sample to assign probability to a partition of the sample space and performs the marginal likelihood integral directly. It uses the posterior sample to discover and tessellate the subset of the sample space that was explored and uses quantiles to compute a representative field value. When integrating directly, this space may be trimmed to remove regions with low probability density and thereby improve accuracy. This second algorithm is consistent for all proper distributions. Error analysis provides some diagnostics on the numerical condition of the results in both cases.
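For orientation, this is the harmonic mean approximation (HMA) that the paper takes as its problematic starting point, evaluated on a conjugate normal toy model where the exact log-evidence is available in closed form; the Lebesgue-integration algorithms proposed in the paper are not implemented here, and the model and sample sizes are illustrative.

import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

rng = np.random.default_rng(9)
# toy model: data_i ~ N(theta, 1) with prior theta ~ N(0, 1), so the evidence is known exactly
data = rng.normal(0.7, 1.0, size=20)
n = len(data)

post_var = 1.0 / (n + 1.0)
post_mean = post_var * data.sum()
theta = rng.normal(post_mean, np.sqrt(post_var), size=200000)    # posterior sample

log_lik = -0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1) - 0.5 * n * np.log(2.0 * np.pi)
log_hma = -(logsumexp(-log_lik) - np.log(theta.size))            # HMA computed in log space

# exact log evidence: marginally, data ~ N(0, I + 1 1^T) for this conjugate model
log_exact = multivariate_normal.logpdf(data, mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n)))
print(log_hma, log_exact)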