
Showing papers on "Probability density function published in 1994"


Journal ArticleDOI
TL;DR: In this paper, the authors study the construction of confidence regions by approximating the sampling distribution of some statistic, where the true sampling distribution is estimated by an appropriate normalization of the values of the statistic computed over subsamples of the data.
Abstract: In this article, the construction of confidence regions by approximating the sampling distribution of some statistic is studied. The true sampling distribution is estimated by an appropriate normalization of the values of the statistic computed over subsamples of the data. In the i.i.d. context, the method has been studied by Wu in regular situations where the statistic is asymptotically normal. The goal of the present work is to prove that the method yields asymptotically valid confidence regions under minimal conditions. Essentially, all that is required is that the statistic, suitably normalized, possesses a limit distribution under the true model. Unlike the bootstrap, the convergence to the limit distribution need not be uniform in any sense. The method is readily adapted to parameters of stationary time series or, more generally, homogeneous random fields. For example, an immediate application is the construction of a confidence interval for the spectral density function of a homogeneous random field.

756 citations
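The subsampling recipe in the abstract can be sketched for the simplest case, the mean of i.i.d. data. The block size, the number of subsamples, and the function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def subsample_ci(x, b, alpha=0.05, n_sub=2000, seed=0):
    """Subsampling confidence interval for the mean (illustrative sketch).

    The sampling distribution of sqrt(n)*(mean - theta) is approximated
    by sqrt(b)*(subsample mean - full-sample mean) over subsamples of size b.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_hat = x.mean()
    reps = np.array([np.sqrt(b) * (rng.choice(x, b, replace=False).mean() - theta_hat)
                     for _ in range(n_sub)])
    lo_q, hi_q = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    # Invert the approximated distribution to get a CI for theta.
    return theta_hat - hi_q / np.sqrt(n), theta_hat - lo_q / np.sqrt(n)

x = np.random.default_rng(1).normal(5.0, 2.0, size=500)
lo, hi = subsample_ci(x, b=50)
```

The same quantile inversion works for any statistic that possesses a limit distribution, which is the point of the paper; only the normalization rate changes.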


Journal ArticleDOI
TL;DR: The theory of linear mean square estimation for complex signals exhibits some connections with circularity, and it is shown that without this assumption, the estimation theory must be reformulated.
Abstract: Circularity is an assumption that was originally introduced for the definition of the probability distribution function of complex normal vectors. However, this concept can be extended in various ways for nonnormal vectors. The first purpose of the paper is to introduce and compare some possible definitions of circularity. From these definitions, it is also possible to introduce the concept of circular signals and to study whether or not the spectral representation of stationary signals introduces circular components. Therefore, the relationships between circularity and stationarity are analyzed in detail. Finally, the theory of linear mean square estimation for complex signals exhibits some connections with circularity, and it is shown that without this assumption, the estimation theory must be reformulated.

541 citations


Journal ArticleDOI
TL;DR: In this paper, the fast chemistry reaction, Fuel + r Oxidizer → (1 + r) Product, was modeled in the context of a large eddy simulation (LES).
Abstract: A method is presented whereby the fast chemistry reaction, Fuel + r Oxidizer → (1 + r) Product, may be modeled in the context of a large eddy simulation (LES). The model is based on a presumed form for the subgrid‐scale probability density function (PDF) of a conserved scalar. The nature of the subgrid‐scale statistics is discussed and it is shown that a beta function representation of the subgrid‐scale PDF is appropriate. Data from both laboratory experiments and direct numerical simulations (DNS) are used to show that the predictions of the model are very accurate, given the exact values for the filtered scalar and its variance. A possible model for this variance is presented based on scale similarity.

361 citations
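The presumed-PDF step, fitting a beta distribution to the filtered scalar mean and its subgrid variance, reduces to a method-of-moments fit. A minimal sketch, assuming SciPy and illustrative variable names:

```python
from scipy.stats import beta

def presumed_beta(zbar, zvar):
    """Beta subgrid PDF on [0, 1] matched to the filtered scalar mean zbar
    and subgrid variance zvar by the method of moments (illustrative).
    Requires 0 < zvar < zbar*(1 - zbar)."""
    g = zbar * (1.0 - zbar) / zvar - 1.0
    return beta(zbar * g, (1.0 - zbar) * g)

dist = presumed_beta(0.3, 0.02)
```

The resulting PDF can then be integrated against the fast-chemistry state relationship to obtain filtered product concentrations.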


Journal ArticleDOI
Yanqin Fan1
TL;DR: In this article, the goodness-of-fit of a distribution function defined on the probability space (Ω,P), which is absolutely continuous with respect to the Lebesgue measure in Rd with probability density function f, is investigated.
Abstract: Let F denote a distribution function defined on the probability space (Ω, 𝒜, P), which is absolutely continuous with respect to the Lebesgue measure in Rd with probability density function f. Let f0(·,β) be a parametric density function that depends on an unknown p × 1 vector β. In this paper, we consider tests of the goodness-of-fit of f0(·,β) for f(·) for some β based on (i) the integrated squared difference between a kernel estimate of f(·) and the quasi-maximum likelihood estimate of f0(·,β) denoted by In and (ii) the integrated squared difference between a kernel estimate of f(·) and the corresponding kernel smoothed estimate of f0(·, β) denoted by Jn. It is shown in this paper that the amount of smoothing applied to the data in constructing the kernel estimate of f(·) determines the form of the test statistic based on In. For each test developed, we also examine its asymptotic properties including consistency and the local power property. In particular, we show that tests developed in this paper, except the first one, are more powerful than the Kolmogorov-Smirnov test under the sequence of local alternatives introduced in Rosenblatt [12], although they are less powerful than the Kolmogorov-Smirnov test under the sequence of Pitman alternatives. A small simulation study is carried out to examine the finite sample performance of one of these tests.

155 citations
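The statistic In compares a kernel density estimate with the fitted parametric density through an integrated squared difference. A minimal one-dimensional sketch, assuming a Gaussian kernel, a Gaussian null family, and a rule-of-thumb bandwidth, all illustrative choices rather than the paper's exact construction:

```python
import numpy as np
from scipy.stats import norm

def isd_statistic(x, n_grid=400):
    """Integrated squared difference between a kernel density estimate
    and a Gaussian fitted by (quasi-)maximum likelihood (illustrative)."""
    x = np.asarray(x, dtype=float)
    h = 1.06 * x.std() * len(x) ** -0.2          # rule-of-thumb bandwidth
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, n_grid)
    fhat = norm.pdf((grid[:, None] - x) / h).mean(axis=1) / h
    f0 = norm.pdf(grid, loc=x.mean(), scale=x.std())
    return np.sum((fhat - f0) ** 2) * (grid[1] - grid[0])

stat = isd_statistic(np.random.default_rng(3).normal(size=500))
```

Large values of the statistic indicate lack of fit; calibrated critical values come from the asymptotic theory developed in the paper.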


Journal ArticleDOI
TL;DR: This paper reformulated the rate-distortion problem in terms of the optimal mapping from the unit interval with Lebesgue measure that would induce the desired reproduction probability density and shows how the number of "symbols" grows as the system undergoes phase transitions.
Abstract: In rate-distortion theory, results are often derived and stated in terms of the optimizing density over the reproduction space. In this paper, the problem is reformulated in terms of the optimal mapping from the unit interval with Lebesgue measure that would induce the desired reproduction probability density. This results in optimality conditions that are "random relatives" of the known Lloyd (1982) optimality conditions for deterministic quantizers. The validity of the mapping approach is assured by fundamental isomorphism theorems for measure spaces. We show that for the squared error distortion, the optimal reproduction random variable is purely discrete at supercritical distortion (where the Shannon (1948) lower bound is not tight). The Gaussian source is thus the only source that produces continuous reproduction variables for the entire range of positive rate. To analyze the evolution of the optimal reproduction distribution, we use the mapping formulation and establish an analogy to statistical mechanics. The solutions are given by the distribution at isothermal statistical equilibrium, and are parameterized by the temperature in direct correspondence to the parametric solution of the variational equations in rate-distortion theory. The analysis of an annealing process shows how the number of "symbols" grows as the system undergoes phase transitions. Thus, an algorithm based on the mapping approach often needs but a few variables to find the exact solution, while the Blahut (1972) algorithm would only approach it at the limit of infinite resolution. Finally, a quick "deterministic annealing" algorithm to generate the rate-distortion curve is suggested. The resulting curve is exact as long as continuous phase transitions in the process are accurately followed.

151 citations


Journal ArticleDOI
TL;DR: In this paper, a statistical description of heterogeneities in gravel aquifers is presented, which is used to generate distinct numerical realizations by unconditional stochastic simulation, having the same statistical properties with respect to hydraulic conductivity and porosity as the investigated deposits.
Abstract: Transport of solutes in groundwater is decisively influenced by the heterogeneity of the aquifer. The goal of the present work is the numerical generation of synthetic heterogeneous aquifer models based on a statistical description of heterogeneities in gravel aquifers. Large unweathered outcrops in several gravel pits in northeastern Switzerland were investigated in this context. On the basis of sedimentological observations it was possible to specify distinct sedimentary structures appearing as lenses or layers. All structures have been identified in each of the investigated outcrops. By inspecting and analyzing photographs of the outcrops, it was possible to estimate the probability density functions (pdf) of the geometrical attributes of the sedimentary structures. Disturbed and undisturbed samples were taken from these structures to estimate the pdf of their hydraulic properties, that is, the hydraulic conductivity and the porosity. The information obtained is used to generate distinct numerical realizations by unconditional stochastic simulation of synthetic aquifers having the same statistical properties with respect to hydraulic conductivity and porosity as the investigated deposits.

150 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the quasilinear evolution of the one-point probability density functions (PDFs) of the smoothed density and velocity fields in a cosmological gravitating system beginning with Gaussian initial fluctuations.
Abstract: We study the quasilinear evolution of the one-point probability density functions (PDFs) of the smoothed density and velocity fields in a cosmological gravitating system beginning with Gaussian initial fluctuations. Our analytic results are based on the Zel'dovich approximation and laminar flow. A numerical analysis extends the results into the multistreaming regime using the smoothed fields of a CDM N-body simulation. We find that the PDF of velocity, both Lagrangian and Eulerian, remains Gaussian under the laminar Zel'dovich approximation, and it is almost indistinguishable from Gaussian in the simulations. The PDF of mass density deviates from a normal distribution early in the quasilinear regime and it develops a shape remarkably similar to a lognormal distribution with one parameter, the rms density fluctuation $\sigma$. Applying these results to currently available data we find that the PDFs of the velocity and density fields, as recovered by the POTENT procedure from observed velocities assuming $\Omega=1$, or as deduced from a redshift survey of IRAS galaxies assuming that galaxies trace mass, are consistent with Gaussian initial fluctuations.

146 citations


Journal ArticleDOI
TL;DR: In this article, an overview of the current status of inverse methods and data assimilation for nonlinear ocean models is given, and the most promising solution methods like simulated annealing, the representer method, and sequential methods based on Monte Carlo simulations are discussed with special focus on applications with nonlinear dynamics.

146 citations


Proceedings ArticleDOI
08 Aug 1994
TL;DR: In this article, a K-distribution was developed to characterize the statistical properties of multi-look processed polarimetric SAR data, where the probability density function (PDF) was derived as the product of a gamma distributed random variable and the polarimetric covariance matrix.
Abstract: A K-distribution has been developed to characterize the statistical properties of multi-look processed polarimetric SAR data. The probability density function (PDF) was derived as the product of a gamma distributed random variable and the polarimetric covariance matrix. The latter characterizes the speckle and the former depicts the inhomogeneity (texture). For multi-look data incoherently averaged from correlated one-look samples, the authors found that, for better modeling, the number of looks has to assume a non-integer value. A procedure was developed to estimate the equivalent number of looks and the parameter of the K-distribution. Experimental results using NASA/JPL 4-look and 16-look polarimetric SAR data substantiated this multi-look K-distribution. The authors also found that the multi-look process reduced the inhomogeneity and made the K-distribution less significant.

145 citations
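The product construction described above, texture times speckle, is easy to simulate for a single intensity channel. The shape parameter, the (deliberately non-integer) look count, and the unit-mean scaling below are illustrative assumptions:

```python
import numpy as np

def k_intensity(n, nu, looks, seed=0):
    """Multi-look K-distributed intensity simulated as gamma texture
    (shape nu) times gamma speckle (shape = number of looks), both
    scaled to unit mean (illustrative single-channel version)."""
    rng = np.random.default_rng(seed)
    texture = rng.gamma(nu, 1.0 / nu, size=n)        # inhomogeneity (texture)
    speckle = rng.gamma(looks, 1.0 / looks, size=n)  # multi-look speckle
    return texture * speckle

I = k_intensity(200_000, nu=4.0, looks=3.4)
```

The non-integer `looks` value echoes the paper's finding that incoherent averaging of correlated one-look samples yields an equivalent number of looks below the nominal count.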


Journal ArticleDOI
TL;DR: This short communication examines the different forms that have been presented in the literature for the log-normal distribution, properly interprets the parameters that appear in these functions, and provides the appropriate equations required to transform between these different distributions and properly evaluate the appropriate statistical parameters.

137 citations


Journal ArticleDOI
TL;DR: The authors show that estimation of signatures from averaged covariance matrices results in smaller biases and variances than averaging single-look signature estimates, and estimate the precision of polarimetric-signature estimates as a function of the number of SAR looks and other system parameters.
Abstract: Derives closed-form expressions for the probability density functions (PDF's) for copolar and cross-polar ratios and for the copolar phase difference for multilook polarimetric SAR data, in terms of elements of the covariance matrix for the backscattering process. The authors begin with the case in which scattering-matrix data are jointly Gaussian-distributed. The resulting copolar-phase PDF is formally identical to the phase PDF arising in the study of SAR interferometry, so the authors' results also apply in that setting. By direct simulation, they verify the closed-form PDF's. They show that estimation of signatures from averaged covariance matrices results in smaller biases and variances than averaging single-look signature estimates. They then generalize their derivation to certain cases in which backscattered intensities and amplitudes are K-distributed. They find in a range of circumstances that the PDF's of polarimetric signatures are unchanged from those derived in the Gaussian case. They verify this by direct simulation, and also examine a case that fails to satisfy an important assumption in their derivation. The forms of the signature distributions continue to describe data well in the latter case, but parameters in distributions fitted to (simulated) data differ from those used to generate the data. Finally, the authors examine samples of K-distributed polarimetric SAR data from Arctic sea ice and find that their theoretical distributions describe the data well with a plausible choice of parameters. This allows the authors to estimate the precision of polarimetric-signature estimates as a function of the number of SAR looks and other system parameters.

Journal ArticleDOI
TL;DR: In this article, a modification of kernel density estimation is proposed, where the first step is ordinary kernel estimation of the density and its cdf, and the second step the data are transformed, using this estimated cDF, to an approximate uniform (or normal or other target) distribution.
Abstract: A modification of kernel density estimation is proposed. The first step is ordinary kernel estimation of the density and its cdf. In the second step the data are transformed, using this estimated cdf, to an approximate uniform (or normal or other target) distribution. The density and cdf of the transformed data are then estimated by the kernel method and, by change of variable, converted to new estimates of the density and the cdf of the original data. This process is repeated for a total of $k$ steps for some integer $k$ greater than 1. If the target density is uniform, then the order of the bias is reduced, provided that the density of the observed data is sufficiently smooth. By proper choice of bandwidth, rates of squared-error convergence equal to those of higher-order kernels are attainable. More precisely, $k$ repetitions of the process are equivalent, in terms of rate of convergence, to a $2k$-th-order kernel. This transformation-kernel estimate is always a bona fide density and appears to be more effective at small sample sizes than higher-order kernel estimators, at least for densities with interesting features such as multiple modes. The main theoretical achievement of this paper is the rigorous establishment of rates of convergence under multiple iteration. Simulations using a uniform target distribution suggest that the possibility of improvement over ordinary kernel estimation is of practical significance for sample sizes as low as 100 and can become appreciable for sample sizes around 400.
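One iteration of the transformation step with a uniform target can be sketched as follows. The Gaussian kernel, the rule-of-thumb pilot bandwidth, and the use of `gaussian_kde` on the transformed data are illustrative choices rather than the paper's exact algorithm:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def transformation_kde(x, grid):
    """One step of transformation-kernel density estimation with a
    uniform target distribution (the paper iterates this k times)."""
    x = np.asarray(x, dtype=float)
    h = 1.06 * x.std() * len(x) ** -0.2                  # pilot bandwidth
    f0 = lambda t: norm.pdf((t[:, None] - x) / h).mean(axis=1) / h
    F0 = lambda t: norm.cdf((t[:, None] - x) / h).mean(axis=1)
    u = F0(x)                      # transformed data, roughly Uniform(0, 1)
    g = gaussian_kde(u)            # re-estimate on the transformed scale
    # Change of variables back to the original scale.
    return g(F0(grid)) * f0(grid)

x = np.random.default_rng(2).normal(size=400)
grid = np.linspace(-3.0, 3.0, 61)
fhat = transformation_kde(x, grid)
```

The change of variables guarantees the output is a bona fide (nonnegative) density, one of the properties the abstract emphasizes.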

Journal ArticleDOI
TL;DR: In this article, a sensitivity analysis of a land-surface parameterization for atmospheric modeling was performed to evaluate the surface parameters most important to the variability of surface heat fluxes, and the Fourier amplitude sensitivity test (FAST) was used for this analysis.
Abstract: Land-surface parameterizations based on a statistical-dynamical approach have been suggested recently to improve the representation of the surface forcing from heterogeneous land in atmospheric models. With this approach, land-surface characteristics are prescribed by probability density functions (PDFs) rather than single 'representative' values as in 'big-leaf' parameterizations. Yet the use of many PDFs results in an increased computational burden and requires the complex problem of representing covariances between PDFs to be addressed. In this study, a sensitivity analysis of a land-surface parameterization for atmospheric modeling was performed to evaluate the surface parameters most important to the variability of surface heat fluxes. The Fourier amplitude sensitivity test (FAST) used for this analysis determines the relative contribution of individual input parameters to the variance of energy fluxes resulting from a heterogeneous surface. By simultaneously varying all parameters according to their individual probability density functions, the number of computations needed is very much reduced by this technique. This analysis demonstrates that most of the variability of surface heat fluxes may be described by the distributions of relative stomatal conductance and surface roughness. Thus, the statistical-dynamical approach may be simplified by the use of only these two probability density functions.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the properties of the probability distribution function of the cosmological continuous density field and compared dynamically motivated methods to derive the PDF, based on the regularization of integrals.
Abstract: The properties of the probability distribution function of the cosmological continuous density field are studied. We present further developments and compare dynamically motivated methods to derive the PDF. One of them is based on the Zel'dovich approximation (ZA). We extend this method for arbitrary initial conditions, regardless of whether they are Gaussian or not. The other approach is based on perturbation theory with Gaussian initial fluctuations. We include the smoothing effects in the PDFs. We examine the relationships between the shapes of the PDFs and the moments. It is found that formally there are no moments in the ZA, but a way to resolve this issue is proposed, based on the regularization of integrals. A closed form for the generating function of the moments in the ZA is also presented, including the smoothing effects. We suggest the methods to build PDFs out of the whole series of the moments, or out of a limited number of moments -- the Edgeworth expansion. The last approach gives us an alternative method to evaluate the skewness and kurtosis by measuring the PDF around its peak. We note a general connection between the generating function of moments for small r.m.s $\sigma$ and the non-linear evolution of the overdense spherical fluctuation in the dynamical models. All these approaches have been applied in 1D case where the ZA is exact, and simple analytical results are obtained. The 3D case is analyzed in the same manner and we found a mutual agreement in the PDFs derived by different methods in the quasi-linear regime. Numerical CDM simulation was used to validate the accuracy of considered approximations. We explain the successful log-normal fit of the PDF from that simulation at moderate $\sigma$ as mere fortune, but not as a universal form of density PDF in general.

Journal ArticleDOI
TL;DR: In this article, the authors examined the probability density functions (PDFs) of the strain-rate tensor eigenvalues and found that the accepted normalization used to bound the intermediate eigenvalue between ±1 leads to a PDF that must vanish at the end points for a non-singular distribution of strain states.
Abstract: Probability density functions (PDFs) of the strain‐rate tensor eigenvalues are examined. It is found that the accepted normalization used to bound the intermediate eigenvalue between ±1 leads to a PDF that must vanish at the end points for a non‐singular distribution of strain states. This purely kinematic constraint has led previous investigators to conclude incorrectly that locally axisymmetric deformations do not exist in turbulent flows. An alternative normalization is presented that does not bias the probability distribution near the axisymmetric limits. This alternative normalization is shown to lead to the expected flat PDF in a Gaussian velocity field and to a PDF that indicates the presence of axisymmetric strain states in a turbulent field. Extension of the new measure to compressible flow is discussed. Several earlier results concerning the likelihood of various strain states and the correlation of these with elevated kinetic energy dissipation rate are reinterpreted in terms of the new normalization.

Journal ArticleDOI
TL;DR: The Liouville equation, as discussed by the authors, provides a framework for the consistent and comprehensive treatment of the uncertainty inherent in meteorological forecasts, expressing the conservation of the phase-space integral of the number density of realizations of a dynamical system originating at the same time instant from different initial conditions, in a way completely analogous to the continuity equation for mass in fluid mechanics.
Abstract: The Liouville equation provides the framework for the consistent and comprehensive treatment of the uncertainty inherent in meteorological forecasts. This equation expresses the conservation of the phase-space integral of the number density of realizations of a dynamical system originating at the same time instant from different initial conditions, in a way completely analogous to the continuity equation for mass in fluid mechanics. Its solution describes the temporal development of the probability density function of the state vector of a given dynamical model. Consideration of the Liouville equation ostensibly avoids in a natural way the problems inherent to more standard methodology for predicting forecast skill, such as the need for higher-moment closure within stochastic-dynamic prediction, or the need to generate a large number of realizations within ensemble forecasting. These benefits, however, are obtained only at the expense of considering high-dimensional problems. The purpose of this ...

Journal ArticleDOI
TL;DR: In this article, a lower bound for the plane-parallel albedo bias was obtained from a fractal model having a range of optical thicknesses similar to those observed in marine stratocumulus.
Abstract: If climate models produced clouds having liquid water amounts close to those observed, they would compute a mean albedo that is often much too large, due to the treatment of clouds as plane-parallel. An approximate lower-bound for this "plane-parallel albedo bias" may be obtained from a fractal model having a range of optical thicknesses similar to those observed in marine stratocumulus, since they are more nearly plane-parallel than most other cloud types. We review and extend results from a model which produces a distribution of liquid water path having a lognormal-like probability density and a power-law wavenumber spectrum, with parameters determined by stratocumulus observations. As the spectral exponent approaches -1, the simulated cloud approaches a well-known multifractal, referred to as the "singular model", but when the exponent is -5/3, similar to what is observed, the cloud exhibits qualitatively different scaling properties, the so-called "bounded model". The mean albedo for bounded cascade clouds is a function of a fractal parameter, 0

Journal ArticleDOI
TL;DR: In this article, it was shown that the greatest amount of probability which can flow back from positive to negative x-values in this counter-intuitive way, over any given time interval, is equal to the largest eigenvalue of a certain Hermitian operator, and it is estimated numerically to have a value near 0.04.
Abstract: Pure states of a free particle in non-relativistic quantum mechanics are described, in which the probability of finding the particle to have a negative x-coordinate increases over an arbitrarily long, but finite, time interval, even though the x-component of the particle's velocity is certainly positive throughout that time interval. It is shown that, for any state of this type, the greatest amount of probability which can flow back from positive to negative x-values in this counter-intuitive way, over any given time interval, is equal to the largest eigenvalue of a certain Hermitian operator, and it is estimated numerically to have a value near 0.04. This value is not only independent of the length of the time interval and the mass of the particle, but is also independent of the value of Planck's constant. It reflects the structure of Schrödinger's equation, rather than the values of the parameters appearing there. Backflow of positive probability is related to the non-positivity of Wigner's density function, and can be regarded as arising from a flow of negative probability in the same direction as the velocity. Generalizations are indicated, to the relativistic free electron, and to non-relativistic cases in which probability backflow occurs even in opposition to an arbitrarily strong constant force.

Journal ArticleDOI
TL;DR: In this paper, a nonlinearity in the wind loading expression for a compliant offshore structure, e.g., a tension leg platform (TLP), results in response statistics that deviate from the Gaussian distribution.
Abstract: The nonlinearity in the wind loading expression for a compliant offshore structure, e.g., a tension leg platform (TLP), results in response statistics that deviate from the Gaussian distribution. This paper focuses on the statistical analysis of the response of these structures to random wind loads. The analysis presented here involves a nonlinear system with memory. As an improvement over the commonly used linearization approach, an equivalent statistical quadratization method is presented. The higher-order response cumulants are based on Volterra series. A direct integration scheme and Kac-Siegert technique is utilized to evaluate the response cumulants. Based on the first four cumulants, the response probability density function, crossing rates, and peak value distribution are derived. The results provide a good comparison with simulation. A nonlinear wind gust loading factor based on the derived extreme value distribution of nonlinear wind effects is formulated.

Journal ArticleDOI
TL;DR: In this article, the authors carried out numerical simulations of waves traversing a three-dimensional random medium with Gaussian statistics and a power-law spectrum with inner-scale cutoff and provided the probability density function (PDF) of irradiance.
Abstract: We have carried out numerical simulations of waves traversing a three-dimensional random medium with Gaussian statistics and a power-law spectrum with inner-scale cutoff. The distributions of irradiance on the final observation screen provide the probability-density function (PDF) of irradiance. For both initially plane and initially spherical waves the simulation PDF’s in the strong-fluctuation regime lie between a K distribution and a log-normal-convolved-with-exponential distribution. We introduce a plot of the PDF of scaled log-normal irradiance, on which both the exponential and the lognormal PDF’s are universal curves and on which the PDF at both large and small irradiance is shown in detail. We have simulated a spherical-wave experiment, including aperture averaging, and find agreement between the simulated and the observed PDF’s.

Journal ArticleDOI
Isaac Freund1
TL;DR: In this article, the statistical probability densities of the six parameters that define an optical vortex (phase singularity) in a Gaussian random wave field were analyzed and good agreement was found between calculation and a computer simulation that generates these vortices.
Abstract: Simple, closed-form analytical expressions are given for the statistical probability densities of the six parameters that define an optical vortex (phase singularity) in a Gaussian random wave field. Good agreement is found between calculation and a computer simulation that generates these vortices.

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate how to employ a limited number of exactly specified moments to approximate the probability density and distribution functions of various random variables, using the technique of Pade approximations.
Abstract: The analysis of radar detection systems often requires extensive knowledge of the special functions of applied mathematics, and their computation. Yet, the moments of the detection random variable are often easily obtained. We demonstrate here how to employ a limited number of exactly specified moments to approximate the probability density and distribution functions of various random variables. The approach is to use the technique of Pade approximations (PA) which creates a pole-zero model of the moment generating function (mgf). This mgf is inverted using residues to obtain the densities.
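The moment-based Padé idea can be illustrated with an Exp(1) variable, whose mgf 1/(1-s) a low-order Padé approximant recovers exactly from the first few moments. This sketch assumes SciPy's `scipy.interpolate.pade` and the single-real-pole case; it is a toy example, not the paper's detection statistics:

```python
import numpy as np
from scipy.interpolate import pade

# Moments of an Exp(1) variable are m_k = k!, so the Taylor coefficients
# m_k / k! of its mgf are all equal to 1.
an = [1.0, 1.0, 1.0, 1.0]
p, q = pade(an, 1)                  # pole-zero (Pade) model of the mgf

approx = p(0.5) / q(0.5)            # mgf value at s = 0.5; exact is 2
s0 = q.r[0]                         # pole of the model (here s0 = 1)
a = -p(s0) / np.polyder(q)(s0)      # minus the residue at the pole
# A real pole contributes a*exp(-s0*x) to the density, since the
# integral of exp(s*x) * a * exp(-s0*x) over [0, inf) equals a/(s0 - s).
f = lambda x: a * np.exp(-s0 * x)   # recovered density, exp(-x) here
```

With more moments and a higher-order denominator, each pole/residue pair of the rational mgf model contributes one exponential term to the approximated density, which is the inversion-by-residues step the abstract describes.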

Journal ArticleDOI
TL;DR: An exact solution is presented of the Fokker-Planck equation which governs the evolution of an ensemble of disordered metal wires of increasing length, in a magnetic field, and the complete probability distribution function of the transmission eigenvalues is obtained.
Abstract: An exact solution is presented of the Fokker-Planck equation which governs the evolution of an ensemble of disordered metal wires of increasing length, in a magnetic field. By a mapping onto a free-fermion problem, the complete probability distribution function of the transmission eigenvalues is obtained. The logarithmic eigenvalue repulsion of random-matrix theory is shown to break down for transmission eigenvalues which are not close to unity. Submitted to Physical Review B.

Journal ArticleDOI
TL;DR: In this article, Kolmogorov's refined similarity hypotheses hold true for a variety of stochastic processes besides high-Reynolds-number turbulent flows, for which they were originally proposed.
Abstract: Kolmogorov's refined similarity hypotheses are shown to hold true for a variety of stochastic processes besides high-Reynolds-number turbulent flows, for which they were originally proposed. In particular, just as hypothesized for turbulence, there exists a variable $V$ whose probability density function attains a universal form. Analytical expressions for the probability density function of $V$ are obtained for Brownian motion as well as for the general case of fractional Brownian motion---the latter under some mild assumptions justified a posteriori. The properties of $V$ for the case of antipersistent fractional Brownian motion with the Hurst exponent of $\frac{1}{3}$ are similar in many details to those of high-Reynolds-number turbulence in atmospheric boundary layers a few meters above the ground. The one conspicuous difference between turbulence and the antipersistent fractional Brownian motion is that the latter does not possess the required skewness. Broad implications of these results are discussed.

Journal ArticleDOI
TL;DR: In this paper, a methodology for generating synthetic modal fields which satisfy the von Karman covariance function is described, and a modality parameter which quantifies the variation between end-member binary and continuous fields is defined.
Abstract: Geologically and petrophysically constrained synthetic random velocity fields are important tools for exploring (through the application of numerical codes) the seismic response of small-scale lithospheric heterogeneities. Statistical and geophysical analysis of mid- and lower-crustal exposures has demonstrated that the probability density function for some seismic velocity fields is likely to be discrete rather than continuous. We apply the term “modal” fields to describe fields of this sort. This letter details a methodology for generating synthetic modal fields which satisfy the von Karman covariance function. In addition, we explore some of the mathematics of “modality”, and define a modality parameter which quantifies the variation between the end-member binary and continuous fields.
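As a hedged illustration of this kind of spectral synthesis (not the authors' code; all parameter names and values here are invented), a binary "modal" field with a von Karman-type covariance can be produced by filtering white noise in Fourier space with the square root of the target power spectrum and then thresholding the result at its median:

```python
import numpy as np

def von_karman_binary_field(n=256, a=10.0, nu=0.3, v_low=5.8, v_high=6.4, seed=0):
    """Sketch: synthesize a 2-D Gaussian random field with a von Karman
    power spectrum ~ (1 + k^2 a^2)^-(nu + E/2), E = 2, then threshold it
    at its median to obtain a binary ("modal") velocity field."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    power = (1.0 + k2 * a ** 2) ** -(nu + 1.0)          # 2-D von Karman spectrum
    noise = rng.standard_normal((n, n))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(power)).real
    # Median threshold -> exactly two discrete modes (a two-spike pdf)
    return np.where(field > np.median(field), v_high, v_low)

field = von_karman_binary_field()
print(sorted(np.unique(field)))   # only two velocity values survive
```

The continuous field before thresholding is the other end member; intermediate "modalities" can be explored by quantizing onto more than two levels.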

Journal ArticleDOI
TL;DR: In this paper, the average probability density P(r,t) of random walks on fractals is revisited within the continuous-time random walk formalism, and corrections to the accepted asymptotic stretched Gaussian decay of the form $r^\alpha$ are discussed.
Abstract: The average probability density P(r,t) of random walks on fractals is revisited within the continuous-time random walk formalism. Corrections to the accepted asymptotic stretched Gaussian decay of P(r,t) of the form $r^\alpha$ are discussed. It is shown that P(r,t) obeys a diffusion equation with a fractional time derivative asymptotically, and predictions about the value of $\alpha$ are presented.
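A minimal CTRW sketch (illustrative, not taken from the paper): with waiting times drawn from a heavy-tailed density $\psi(t) \sim t^{-1-\beta}$, $0 < \beta < 1$, the walk is subdiffusive and its mean-square displacement grows as $t^\beta$, which can be checked numerically:

```python
import numpy as np

def ctrw_msd(beta=0.5, n_walkers=2000, t_max=1000.0, n_obs=20, seed=1):
    """Continuous-time random walk with Pareto-tailed waiting times
    wait = u**(-1/beta), u ~ U(0,1), and unit steps +-1.  Returns the
    mean-square displacement sampled at logarithmically spaced times."""
    rng = np.random.default_rng(seed)
    obs_times = np.logspace(0, np.log10(t_max), n_obs)
    msd = np.zeros(n_obs)
    for _ in range(n_walkers):
        t, x, i = 0.0, 0, 0
        pos_at = np.zeros(n_obs)
        while i < n_obs:
            t += rng.random() ** (-1.0 / beta)       # heavy-tailed wait
            while i < n_obs and obs_times[i] < t:    # record position held
                pos_at[i] = x                        # during this wait
                i += 1
            x += rng.choice((-1, 1))                 # then jump
        msd += pos_at ** 2
    return obs_times, msd / n_walkers

times, msd = ctrw_msd()
slope = np.polyfit(np.log(times[5:]), np.log(msd[5:]), 1)[0]
print(f"MSD scaling exponent ~ {slope:.2f} (theory: beta = 0.5)")
```

The fitted log-log slope should sit well below 1 (the value for ordinary diffusion), consistent with the fractional-time-derivative diffusion equation described above.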

Journal ArticleDOI
TL;DR: In this paper, the authors introduce density estimation for functions of observations, such as the interpoint distance between observations arising in spatial statistics from the fields of biometry and regional science, which can be viewed without the restriction of a priori imposing limitations of a class of parametric curves.
Abstract: Density estimates, such as histograms and more sophisticated versions, are important in applied and theoretical statistics. In applied statistics, a density estimate gives the data analyst a graphical overview of the shape of the distribution. This overview allows the data analyst to arrive immediately at a qualitative impression of the location, scale, and various aspects of the extremes of the distribution. In theoretical statistics, the shape of the density allows the researcher to link the data to families of curves, perhaps indexed parametrically. By estimating a density nonparametrically, certain aspects of the data can be viewed without the restriction of a priori imposing limitations of a class of parametric curves. In this article we introduce density estimation for functions of observations. To motivate the study, one type of function used is the interpoint distance between observations arising in spatial statistics from the fields of biometry and regional science. A second type of func...
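The interpoint-distance case can be sketched as follows (a toy illustration, not the article's method; the spatial point pattern is synthetic):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Nonparametric density estimate of a *function of observations*:
# the pairwise Euclidean distances of a 2-D point pattern.
rng = np.random.default_rng(42)
points = rng.uniform(0.0, 1.0, size=(200, 2))       # hypothetical spatial data

diffs = points[:, None, :] - points[None, :, :]
dist = np.sqrt((diffs ** 2).sum(-1))
iu = np.triu_indices(len(points), k=1)
distances = dist[iu]                                # all n(n-1)/2 distances

kde = gaussian_kde(distances)                       # kernel density estimate
grid = np.linspace(0.0, distances.max(), 100)
density = kde(grid)
print(f"{len(distances)} distances, density peaks near r = {grid[density.argmax()]:.2f}")
```

Note that the distances are strongly dependent (each point enters n-1 of them), which is exactly the complication that motivates a dedicated theory for density estimation of functions of observations.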

Journal ArticleDOI
TL;DR: In this article, the authors compare different nonlinear approximations to gravitational clustering in weakly nonlinear regime, using as a comparative statistic the evolution of non-Gaussianity which can be characterised by a set of numbers $S_p$ describing connected moments of the density field at the lowest order in $ $: $ _c \simeq S_n ^{n-1}$.
Abstract: We compare different nonlinear approximations to gravitational clustering in the weakly nonlinear regime, using as a comparative statistic the evolution of non-Gaussianity, which can be characterised by a set of numbers $S_p$ describing connected moments of the density field at the lowest order in $\langle\delta^2\rangle$: $\langle\delta^n\rangle_c \simeq S_n \langle\delta^2\rangle^{n-1}$. Generalizing earlier work by Bernardeau (1992), we develop an ansatz to evaluate all $S_p$ in a given approximation by means of a generating function which can be shown to satisfy the equations of motion of a homogeneous spherical density enhancement in that approximation. On the basis of the values of $S_p$ we show that approximations formulated in Lagrangian space (such as the Zeldovich approximation and its extensions) are considerably more accurate than those formulated in Eulerian space, such as the Frozen Flow and Linear Potential approximations. In particular we find that the $n$th-order Lagrangian perturbation approximation correctly reproduces the first $n+1$ parameters $S_p$. We also evaluate the density probability distribution function for the different approximations in the quasi-linear regime and compare our results with an exact analytic treatment in the case of the Zeldovich approximation.
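The hierarchical amplitudes $S_p$ are simple sample statistics. As a hedged numerical aside (not from the paper), $S_3 = \langle\delta^3\rangle_c / \langle\delta^2\rangle^2$ can be estimated from samples of a density contrast and checked against a distribution with a known answer, e.g. the lognormal, for which $S_3 = 3 + \langle\delta^2\rangle$:

```python
import numpy as np

def s3(delta):
    """Estimate the hierarchical skewness S_3 = <delta^3>_c / <delta^2>^2
    from samples of a density contrast delta (connected = central here)."""
    d = delta - delta.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 2

# Lognormal density contrast with <delta> = 0: delta = exp(g - s^2/2) - 1
rng = np.random.default_rng(0)
sigma = 0.3
g = rng.normal(0.0, sigma, size=2_000_000)
delta = np.exp(g - sigma ** 2 / 2) - 1.0

var = (delta ** 2).mean()
print(f"estimated S_3 = {s3(delta):.3f}, analytic = {3 + var:.3f}")
```

For the lognormal, $\langle\delta^3\rangle_c = (w-1)^2(w+2)$ with $w = e^{\sigma^2}$ and $\langle\delta^2\rangle = w-1$, which gives $S_3 = w + 2 = 3 + \langle\delta^2\rangle$ exactly.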

Journal ArticleDOI
S. Colombi1
TL;DR: In this article, a Lagrangian version of the continuity equation is used to fit the probability distribution function (PDF) of the large-scale density field $\rho$, which can be used as an analytical tool to study the effect on the PDF of the transition between the weakly nonlinear regime and the highly nonlinear regime.
Abstract: I propose a method to fit the probability distribution function (PDF) of the large-scale density field $\rho$, motivated by a Lagrangian version of the continuity equation. It consists in applying the Edgeworth expansion to the quantity $\Phi \equiv \log\rho - \langle\log\rho\rangle$. The method is tested on the matter particle distribution in two cold dark matter N-body simulations of different physical sizes to cover a large dynamic range. It is seen to be very efficient, even in the nonlinear regime, and may thus be used as an analytical tool to study the effect on the PDF of the transition between the weakly nonlinear regime and the highly nonlinear regime.
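A first-order Edgeworth fit of this kind can be sketched in a few lines (a toy illustration with a hypothetical density distribution, not Colombi's simulation data):

```python
import numpy as np

def edgeworth_pdf(x, skew):
    """First-order Edgeworth expansion around a standard Gaussian:
    phi(x) * [1 + (skew/6) * H3(x)], with Hermite polynomial H3(x) = x^3 - 3x."""
    phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
    return phi * (1.0 + skew / 6.0 * (x ** 3 - 3 * x))

# Toy density field rho with <rho> = 1 (a gamma variate stands in for the
# N-body density); work with the standardized Phi = log(rho) - <log(rho)>.
rng = np.random.default_rng(3)
rho = rng.gamma(shape=8.0, scale=1.0 / 8.0, size=500_000)
phi_var = np.log(rho) - np.log(rho).mean()
x = (phi_var - phi_var.mean()) / phi_var.std()
skew = (x ** 3).mean()

hist, edges = np.histogram(x, bins=60, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
err = np.abs(hist - edgeworth_pdf(centers, skew)).max()
print(f"skewness of Phi = {skew:.3f}, max |histogram - Edgeworth| = {err:.3f}")
```

Because $\Phi$ is only mildly skewed even when $\rho$ itself is strongly non-Gaussian, a short Edgeworth series fits it well; the usual caveat applies that the truncated expansion can go slightly negative in the far tails.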

Journal ArticleDOI
TL;DR: In this article, the probability density function (pdf) of optical signal intensity in an optical communication channel impaired by motion-induced beam jitter and turbulence is derived and the conditions for which these approximations seem to be valid are also discussed.
Abstract: Expressions for the probability density function (pdf) of optical signal intensity in an optical communication channel impaired by motion-induced beam jitter and turbulence are derived. It is assumed that the optical beam possesses a Gaussian profile, generated by a pulsed laser, and that the beam scintillation is governed by either a log-normal distribution for weak turbulence or a K distribution for moderate to strong turbulence in the saturation region. For extreme propagation distances or very strong turbulence, a negative exponential pdf is used to model turbulence. For the aforementioned beam scintillation statistics, approximate pdf's for the signal intensity are also obtained, and the conditions under which these approximations are valid are discussed.
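A Monte Carlo sketch of the combined channel (illustrative only; it samples the weak-turbulence model rather than reproducing the paper's closed-form pdf's, and all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
# Beam radius, jitter scale, and log-amplitude std are hypothetical values.
w, sigma_j, sigma_chi = 1.0, 0.4, 0.2

r = rng.rayleigh(sigma_j, n)                  # radial pointing error (jitter)
beam = np.exp(-2.0 * r ** 2 / w ** 2)         # Gaussian beam profile at offset r
# Log-normal scintillation with unit mean: sigma_I = 2 * sigma_chi
scint = np.exp(rng.normal(-2 * sigma_chi ** 2, 2 * sigma_chi, n))
intensity = beam * scint                      # received (normalized) intensity

print(f"mean = {intensity.mean():.3f}, P(I < 0.1) = {(intensity < 0.1).mean():.3f}")
```

Histogramming `intensity` gives an empirical version of the channel pdf; swapping the log-normal factor for K-distributed or negative-exponential fading covers the stronger-turbulence regimes discussed above.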