
Showing papers on "Probability density function published in 1998"


Journal ArticleDOI
TL;DR: In this article, the authors investigate a new basic model for asset pricing, the hyperbolic model, which allows an almost perfect statistical fit of stock return data, and compare it to the classical Black-Scholes model.
Abstract: The authors investigate a new basic model for asset pricing, the hyperbolic model, which allows an almost perfect statistical fit of stock return data. After a detailed introduction into the theory they use secondary market data to compare the hyperbolic model to the classical Black-Scholes model. The authors study implicit volatilities, the smile effect, and pricing performance. Exploiting the full power of the hyperbolic model, they construct an option value process from a statistical point of view by estimating the implicit risk-neutral density function from option data. Finally, the authors present some new value-at-risk calculations leading to new perspectives to cope with model risk. Copyright 1998 by University of Chicago Press.
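For orientation, the hyperbolic density in one standard parameterization (Barndorff-Nielsen's form; the paper may use a different but equivalent one) is

```latex
f(x;\alpha,\beta,\delta,\mu) \;=\;
\frac{\sqrt{\alpha^{2}-\beta^{2}}}
     {2\alpha\delta\,K_{1}\!\bigl(\delta\sqrt{\alpha^{2}-\beta^{2}}\bigr)}
\exp\!\Bigl(-\alpha\sqrt{\delta^{2}+(x-\mu)^{2}} \;+\; \beta\,(x-\mu)\Bigr),
```

whose log-density is a hyperbola, so its tails decay exponentially rather than quadratically as in the Gaussian case; K1 denotes the modified Bessel function of the third kind.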

501 citations


Journal ArticleDOI
TL;DR: In this article, the authors used morphological information obtained from a 2D slice of a thin section of a random medium to reconstruct the full three-dimensional (3D) medium.
Abstract: We report on an investigation concerning the utilization of morphological information obtained from a two-dimensional (2D) slice (thin section) of a random medium to reconstruct the full three-dimensional (3D) medium. We apply a procedure that we developed in an earlier paper that incorporates any set of statistical correlation functions to reconstruct a Fontainebleau sandstone in three dimensions. Since we have available the experimentally determined 3D representation of the sandstone, we can probe the extent to which intrinsically 3D information (such as connectedness) is captured in the reconstruction. We considered reconstructing the sandstone using the two-point probability function and lineal-path function as obtained from 2D cuts (cross sections) of the sample. The reconstructions are able to reproduce accurately certain 3D properties of the pore space, such as the pore-size distribution, the mean survival time of a Brownian particle, and the fluid permeability. The degree of connectedness of the pore space also compares remarkably well with the actual sandstone. However, not surprisingly, visualization of the 3D pore structures reveals that the reconstructions are not perfect. A more refined reconstruction can be produced by incorporating higher-order information at the expense of greater computational cost. Finally, we remark that our reconstruction study sheds light on the nature of information contained in the employed correlation functions. © 1998 The American Physical Society
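As a point of reference for the statistics named above, here is a minimal sketch (not the paper's reconstruction procedure, which matches such statistics via optimization) of estimating the two-point probability function S2(r) of one phase of a 2D binary image; the array names and the periodic-boundary choice are illustrative assumptions.

```python
import numpy as np

def two_point_probability(phase, max_lag):
    """Estimate S2(r) of a 2D binary image along one axis.

    phase: 2D boolean array, True where a pixel belongs to the phase of interest.
    S2(r) is the probability that two pixels separated by r (here: along the
    second axis, with periodic wrap-around) both belong to that phase.
    """
    phase = np.asarray(phase, dtype=float)
    return np.array([(phase * np.roll(phase, r, axis=1)).mean()
                     for r in range(max_lag + 1)])

# Sanity check: S2(0) equals the area fraction of the phase.
img = np.random.default_rng(1).random((256, 256)) < 0.3
s2 = two_point_probability(img, max_lag=20)
```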

475 citations


Journal ArticleDOI
TL;DR: The Metropolis-Hastings algorithm is a method of constructing a reversible Markov transition kernel with a specified invariant distribution, as discussed by the authors.
Abstract: The Metropolis-Hastings algorithm is a method of constructing a reversible Markov transition kernel with a specified invariant distribution. This note describes necessary and sufficient conditions on the candidate generation kernel and the acceptance probability function for the resulting transition kernel and invariant distribution to satisfy the detailed balance conditions. A simple general formulation is used that covers a range of special cases treated separately in the literature. In addition, results on a useful partial ordering of finite state space reversible transition kernels are extended to general state spaces and used to compare the performance of two approaches to using mixtures in Metropolis-Hastings kernels.
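To make the acceptance step concrete, here is a minimal random-walk Metropolis-Hastings sketch in Python; the symmetric Gaussian proposal and the example target are illustrative assumptions, not the note's general candidate-generation kernel.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps=10_000, proposal_std=1.0, rng=None):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.

    Accepting a candidate y with probability min(1, pi(y)/pi(x)) makes the
    resulting Markov kernel reversible with respect to the target density pi,
    which is the detailed-balance property discussed in the note.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        y = x + proposal_std * rng.standard_normal()   # candidate draw
        log_alpha = log_target(y) - log_target(x)      # log acceptance ratio
        if np.log(rng.uniform()) < log_alpha:          # accept with prob min(1, ratio)
            x = y
        samples[i] = x
    return samples

# Example: draw from a standard normal target (log-density up to a constant).
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0)
```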

411 citations


Journal ArticleDOI
TL;DR: Not only the simulation model's amplitude and phase probability density function (PDF) but also higher order statistics, e.g., the level-crossing rate (LCR) and average duration of fades (ADF), are investigated.
Abstract: Rice's (1944, 1945) sum of sinusoids can be used for an efficient approximation of colored Gaussian noise processes and is therefore of great importance to the software and hardware realization of mobile fading channel models. Although several methods can be found in the literature for the computation of the parameters characterizing a proper set of sinusoids, less is reported about the statistical properties of the resulting (deterministic) simulation model. In this paper, not only is the simulation model's amplitude and phase probability density function (PDF) investigated, but also higher order statistics, e.g., the level-crossing rate (LCR) and average duration of fades (ADF). It is shown that due to the deterministic nature of the simulation model, analytical expressions for the PDF of the amplitude and phase, autocorrelation function (ACF), cross-correlation function (CCF), LCR, and ADF can be derived. We also propose a new procedure for the determination of an optimal set of sinusoids, i.e., for a given number of sinusoids the method results in an optimal approximation of Gaussian, Rayleigh, and Rice processes with given Doppler power spectral density (PSD) properties. It is shown that the new method can easily be applied to the approximation of various other kinds of distribution functions, such as the Nakagami (1960) and Weibull distributions. Moreover, a quasi-optimal parameter computation method is presented.
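A minimal sketch of the sum-of-sinusoids idea, assuming a simple Jakes-like choice of discrete Doppler frequencies rather than the optimal parameter sets derived in the paper; all names and the sampling setup are illustrative.

```python
import numpy as np

def sum_of_sinusoids(t, f_max, n_sin=20, rng=None):
    """Approximate one real colored Gaussian process by a finite sum of sinusoids.

    The discrete Doppler frequencies below follow a simple Jakes-like spacing;
    the paper instead computes optimal frequencies and gains for a target
    Doppler power spectral density.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = np.arange(1, n_sin + 1)
    freqs = f_max * np.sin(np.pi * (n - 0.5) / (2 * n_sin))   # discrete Doppler frequencies
    phases = rng.uniform(0.0, 2.0 * np.pi, n_sin)             # independent random phases
    gain = np.sqrt(2.0 / n_sin)                               # equal gains -> unit variance
    return gain * np.cos(2.0 * np.pi * np.outer(t, freqs) + phases).sum(axis=1)

t = np.arange(0.0, 1.0, 1e-4)                     # 1 s at 10 kHz sampling
mu1 = sum_of_sinusoids(t, f_max=91.0)             # in-phase Gaussian component
mu2 = sum_of_sinusoids(t, f_max=91.0)             # quadrature Gaussian component
envelope = np.hypot(mu1, mu2)                     # approximately Rayleigh-distributed envelope
```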

281 citations


Journal ArticleDOI
TL;DR: In this paper, the probability density function (PDF) formulation of one scalar field undergoing diffusion, turbulent convection and chemical reaction is restated in terms of stochastic fields.
Abstract: The probability density function (PDF) formulation of one scalar field undergoing diffusion, turbulent convection and chemical reaction is restated in terms of stochastic fields. These fields are smooth in space as they have a length scale similar to that of the PDF. Their evolution is described by a set of stochastic partial differential equations, which are solved using a finite volume scheme with a stochastic source term. The application of this methodology to a particular flow is shown first for a linear source term, with exact analytical solution for the mean and standard deviation, and then for a nonlinear reaction.

276 citations


Journal ArticleDOI
TL;DR: All moments of both the likelihood and the log likelihood under both hypotheses can be derived from this one function, and the AUC can be expressed, to an excellent approximation, in terms of the likelihood-generating function evaluated at the origin.
Abstract: We continue the theme of previous papers [J. Opt. Soc. Am. A 7, 1266 (1990); 12, 834 (1995)] on objective (task-based) assessment of image quality. We concentrate on signal-detection tasks and figures of merit related to the ROC (receiver operating characteristic) curve. Many different expressions for the area under an ROC curve (AUC) are derived for an arbitrary discriminant function, with different assumptions on what information about the discriminant function is available. In particular, it is shown that AUC can be expressed by a principal-value integral that involves the characteristic functions of the discriminant. Then the discussion is specialized to the ideal observer, defined as one who uses the likelihood ratio (or some monotonic transformation of it, such as its logarithm) as the discriminant function. The properties of the ideal observer are examined from first principles. Several strong constraints on the moments of the likelihood ratio or the log likelihood are derived, and it is shown that the probability density functions for these test statistics are intimately related. In particular, some surprising results are presented for the case in which the log likelihood is normally distributed under one hypothesis. To unify these considerations, a new quantity called the likelihood-generating function is defined. It is shown that all moments of both the likelihood and the log likelihood under both hypotheses can be derived from this one function. Moreover, the AUC can be expressed, to an excellent approximation, in terms of the likelihood-generating function evaluated at the origin. This expression is the leading term in an asymptotic expansion of the AUC; it is exact whenever the likelihood-generating function behaves linearly near the origin. It is also shown that the likelihood-generating function at the origin sets a lower bound on the AUC in all cases.
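For reference, the approximation alluded to in the last sentences is usually quoted in the following form, with G denoting the likelihood-generating function (the notation, and the exact statement of the accompanying bound, should be checked against the paper):

```latex
\mathrm{AUC} \;\approx\; \tfrac{1}{2}\Bigl[\,1 + \operatorname{erf}\!\bigl(\sqrt{G(0)/2}\,\bigr)\Bigr],
```

so that, to leading order, the single number G(0) summarizes the detectability of the signal.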

258 citations


Journal ArticleDOI
TL;DR: In this article, the form of the one-point probability density function (pdf) for the density field of the interstellar medium using numerical simulations that successively reduce the number of physical processes included was investigated.
Abstract: We investigate the form of the one-point probability density function (pdf) for the density field of the interstellar medium using numerical simulations that successively reduce the number of physical processes included. Two-dimensional simulations of self-gravitating supersonic MHD turbulence, of supersonic self-gravitating hydrodynamic turbulence, and of decaying Burgers turbulence produce in all cases filamentary density structures and evidence for a power-law density pdf at large densities with logarithmic slope between -1.7 and -2.3. This suggests that a power-law shape of the pdf and the general filamentary morphology are the signature of the nonlinear advection operator. These results do not support previous claims that the pdf is lognormal. A series of one-dimensional simulations of forced supersonic polytropic turbulence is used to resolve the discrepancy. They suggest that the pdf is lognormal only for effective polytropic indices γ = 1 (or nearly lognormal for γ ≠ 1 if the Mach number is sufficiently small), while power laws develop for densities larger than the mean if γ < 1. We evaluate the polytropic index for conditions relevant to the cool interstellar medium using published cooling functions and different heating sources, finding that a lognormal pdf should probably occur at densities around 10^3 and is possible at larger densities, depending strongly on the role of gas-grain heating and cooling. Several applications are examined. First, we question a recent derivation of the initial mass function from the density pdf by Padoan, Nordlund, & Jones because (1) the pdf does not contain spatial information and (2) their derivation produces the most massive stars in the voids of the density distribution. Second, we illustrate how a distribution of ambient densities can alter the predicted form of the size distribution of expanding shells. Finally, a brief comparison is made with the density pdfs found in cosmological simulations.
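For context, the lognormal form that the power-law tails are contrasted with is conventionally written for the logarithmic density s = ln(ρ/ρ0) as (a standard expression, not quoted from the paper):

```latex
p(s)\,ds \;=\; \frac{1}{\sqrt{2\pi\sigma^{2}}}
\exp\!\left[-\frac{(s-s_{0})^{2}}{2\sigma^{2}}\right] ds ,
\qquad s=\ln(\rho/\rho_{0}),\quad s_{0}=-\tfrac{1}{2}\sigma^{2},
```

where the offset s0 = -σ²/2 keeps the mean density equal to ρ0.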

237 citations


Journal ArticleDOI
TL;DR: The integer least-squares problem associated with estimating the parameters can be solved efficiently in practice and sharp upper and lower bounds can be found on the probability of correct integer parameter estimation.
Abstract: We consider parameter estimation in linear models when some of the parameters are known to be integers. Such problems arise, for example, in positioning using carrier phase measurements in the global positioning system (GPS), where the unknown integers enter the equations as the number of carrier signal cycles between the receiver and the satellites when the carrier signal is initially phase locked. Given a linear model, we address two problems: (1) the problem of estimating the parameters and (2) the problem of verifying the parameter estimates. We show that with additive Gaussian measurement noise the maximum likelihood estimates of the parameters are given by solving an integer least-squares problem. Theoretically, this problem is very difficult computationally (NP-hard); verifying the parameter estimates (computing the probability of estimating the integer parameters correctly) requires computing the integral of a Gaussian probability density function over the Voronoi cell of a lattice. This problem is also very difficult computationally. However, by using a polynomial-time algorithm due to Lenstra, Lenstra, and Lovasz (1982), the LLL algorithm, the integer least-squares problem associated with estimating the parameters can be solved efficiently in practice; sharp upper and lower bounds can be found on the probability of correct integer parameter estimation. We conclude the paper with simulation results that are based on a synthetic GPS setup.
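A toy sketch of the estimation problem in Python; it rounds the unconstrained least-squares solution component-wise, which is only a crude stand-in for the LLL-reduced integer least-squares search described in the abstract, and all names are illustrative.

```python
import numpy as np

def naive_integer_least_squares(A, y):
    """Illustrative (suboptimal) integer estimate: round the float least-squares solution.

    The paper's approach reduces the lattice with the LLL algorithm before searching,
    which gives much better estimates; plain rounding only sketches the problem
    setup  min_z ||y - A z||^2  over integer vectors z.
    """
    z_float, *_ = np.linalg.lstsq(A, y, rcond=None)   # unconstrained solution
    return np.rint(z_float).astype(int)               # round each component to an integer

# Toy example: 3 measurements, 2 integer parameters, small Gaussian noise.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))
z_true = np.array([4, -7])
y = A @ z_true + 0.01 * rng.normal(size=3)
print(naive_integer_least_squares(A, y))              # recovers [4, -7] when the noise is small
```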

231 citations


Journal ArticleDOI
TL;DR: In this paper, a case study of the application of the Bayesian strategy to inversion of surface seismic field data is presented, where the authors use Bayes theorem to combine an a priori probability density on the space of models with the data misfit function into a final a posteriori probability density reflecting both data fit and model reasonableness.
Abstract: The goal of geophysical inversion is to make quantitative inferences about the Earth from remote observations. Because the observations are finite in number and subject to uncertainty, these inferences are inherently probabilistic. A key step is to define what it means for an Earth model to fit the data. This requires estimation of the uncertainties in the data, both those due to random noise and those due to theoretical errors. But the set of models that fit the data usually contains unrealistic models; i.e., models that violate our a priori prejudices, other data, or theoretical considerations. One strategy for eliminating such unreasonable models is to define an a priori probability density on the space of models, then use Bayes theorem to combine this probability with the data misfit function into a final a posteriori probability density reflecting both data fit and model reasonableness. We show here a case study of the application of the Bayesian strategy to inversion of surface seismic field data. Assuming that all uncertainties can be described by multidimensional Gaussian probability densities, we incorporate into the calculation information about ambient noise, discretization errors, theoretical errors, and a priori information about the set of layered Earth models derived from in situ petrophysical measurements. The result is a probability density on the space of models that takes into account all of this information. Inferences on model parameters can be derived by integration of this function. We begin by estimating the parameters of the Gaussian probability densities assumed to describe the data and model uncertainties. These are combined via Bayes theorem. The a posteriori probability is then optimized via a nonlinear conjugate gradient procedure to find the maximum a posteriori model. Uncertainty analysis is performed by making a Gaussian approximation of the a posteriori distribution about this peak model. We present the results of this analysis in three different forms: the maximum a posteriori model bracketed by one standard deviation error bars, pseudo-random simulations of the a posteriori probability (showing the range of typical subsurface models), and marginals of this probability at selected depths in the subsurface. The models we compute are consistent both with the surface seismic data and the borehole measurements, even though the latter are well below the resolution of the former. We also contrast the Bayesian maximum a posteriori model with the Occam model, which is the smoothest model that fits the surface seismic data alone.
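Under the all-Gaussian assumptions described above, the a posteriori density takes the familiar form below (generic notation: data d, forward operator g, data and model covariances C_D and C_M; the symbols are not taken from the paper itself):

```latex
\sigma(\mathbf{m}) \;\propto\;
\exp\!\Bigl\{-\tfrac{1}{2}\Bigl[
  \bigl(g(\mathbf{m})-\mathbf{d}\bigr)^{\mathsf T} C_D^{-1} \bigl(g(\mathbf{m})-\mathbf{d}\bigr)
 +\bigl(\mathbf{m}-\mathbf{m}_{\mathrm{prior}}\bigr)^{\mathsf T} C_M^{-1} \bigl(\mathbf{m}-\mathbf{m}_{\mathrm{prior}}\bigr)
\Bigr]\Bigr\},
```

and the maximum a posteriori model minimizes the bracketed misfit, which is what the nonlinear conjugate gradient step optimizes.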

211 citations


Journal ArticleDOI
TL;DR: In this article, the optimal experimental design for determining the kinetic parameters of the model resulting from the Weibull probability density function was studied, by defining the sampling conditions that lead to a minimum confidence region of the estimates, for a number of observations equal to the number of parameters.

205 citations


Journal ArticleDOI
TL;DR: It is shown that the probability density function of the maximum signal-to-interference ratio (SIR) at the output of the optimum combiner has a Hotelling T^2 distribution.
Abstract: Optimum combining for space diversity reception is studied in digital cellular mobile radio communication systems with Rayleigh fading and multiple cochannel interferers. This paper considers binary phase-shift keying (BPSK) modulation in a flat Rayleigh-fading environment when the number of interferers L is no less than the number of antenna elements N (L ≥ N). The approach of this paper and its main contribution is to carry out the analysis in a multivariate framework. Using this approach and with the assumption of equal-power interferers, it is shown that the probability density function of the maximum signal-to-interference ratio (SIR) at the output of the optimum combiner has a Hotelling T^2 distribution. Closed-form expressions using hypergeometric functions are derived for the outage probability and the average probability of bit error. Theoretical results are demonstrated by Monte Carlo simulations.

Patent
11 Nov 1998
TL;DR: In this article, a multidimensional index for nearest neighbor queries on a database of records has been proposed, which is based on first obtaining a statistical model of the content of the data in the form of a probability density function. This density is then used to decide how data should be reorganized on disk for efficient nearest neighbor queries.
Abstract: Method and apparatus for efficiently performing nearest neighbor queries on a database of records wherein each record has a large number of attributes by automatically extracting a multidimensional index from the data. The method is based on first obtaining a statistical model of the content of the data in the form of a probability density function. This density is then used to decide how data should be reorganized on disk for efficient nearest neighbor queries. At query time, the model decides the order in which data should be scanned. It also provides the means for evaluating the probability of correctness of the answer found so far in the partial scan of data determined by the model. In this invention a clustering process is performed on the database to produce multiple data clusters. Each cluster is characterized by a cluster model. The set of clusters represent a probability density function in the form of a mixture model. A new database of records is built having an augmented record format that contains the original record attributes and an additional record attribute containing a cluster number for each record based on the clustering step. The cluster model uses a probability density function for each cluster, so that the process of augmenting the attributes of each record is accomplished by evaluating each record's probability with respect to each cluster. Once the augmented records are used to build a database, the augmented attribute is used as an index into the database so that nearest neighbor query analysis can be very efficiently conducted using an indexed look-up process. As the database is queried, the probability density function is used to determine the order in which clusters or database pages are scanned. The probability density function is also used to determine when scanning can stop because the nearest neighbor has been found with high probability.
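A rough sketch of the idea in Python, using scikit-learn's GaussianMixture as a stand-in for the patent's clustering step; the early-stopping rule based on the density is omitted, and all function names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def build_index(records, n_clusters=8, seed=0):
    """Fit a mixture-model density and tag each record with a cluster id.

    The cluster id plays the role of the augmented attribute; a real system
    would also lay the records out on disk cluster by cluster.
    """
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(records)
    return gmm, gmm.predict(records)

def nearest_neighbor(query, records, gmm, cluster_id):
    """Scan clusters in decreasing order of the query's membership probability."""
    order = np.argsort(gmm.predict_proba(query[None, :])[0])[::-1]
    best, best_dist = -1, np.inf
    for c in order:                                  # the patent stops scanning once the answer
        members = np.flatnonzero(cluster_id == c)    # is correct with high probability; this
        if members.size == 0:                        # sketch simply scans every cluster
            continue
        dist = np.linalg.norm(records[members] - query, axis=1)
        i = int(np.argmin(dist))
        if dist[i] < best_dist:
            best, best_dist = int(members[i]), float(dist[i])
    return best, best_dist

# Usage: records is an (n_records, n_attributes) float array, query a 1D attribute vector.
```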

Journal ArticleDOI
TL;DR: This paper is a tutorial on the calculation of error probabilities for fading channels through the evaluation of the Laplace transform Φ_Δ(s) of the probability density function of the difference Δ between the metrics of two competing signal sequences.
Abstract: This paper is a tutorial on the calculation of error probabilities for fading channels. The method we advocate is centered on the evaluation of the Laplace transform Φ_Δ(s) of the probability density function of the difference Δ between the metrics of two competing signal sequences. In some cases, knowledge of the function Φ_Δ(s) allows one to determine error probabilities exactly in closed form, or asymptotically as the signal-to-noise ratio grows to infinity. The general technique that we describe here allows their computation in numerical form to any desired degree of accuracy. Coded as well as uncoded transmission can be considered in this unified framework. For illustration's sake, some examples of calculations are provided for frequency-flat slow-fading channels in which coherent detection is used, and the channel state information is perfect, or unavailable, or obtained from a noisy pilot tone.
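The pairwise error probability this transform yields is the probability that the metric difference is negative; with the convention Φ_Δ(s) = E[e^{-sΔ}] it can be written as the inversion integral below (sign conventions may differ slightly from the paper's):

```latex
P(\Delta < 0) \;=\; \frac{1}{2\pi j}\int_{c-j\infty}^{\,c+j\infty} \Phi_{\Delta}(s)\,\frac{ds}{s},
\qquad c > 0 \ \text{inside the region of convergence},
```

which is then evaluated numerically (for example by Gauss–Chebyshev quadrature) to any desired accuracy.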

Journal Article
TL;DR: In this article, a random-walk model is proposed for space-fractional diffusion; the fundamental solutions of the generalized diffusion equations are shown to provide certain probability density functions, in space or time, which are related to the relevant class of stable distributions.
Abstract: Fractional calculus allows one to generalize the linear (one-dimensional) diffusion equation by replacing either the first time-derivative or the second space-derivative by a derivative of a fractional order. The fundamental solutions of these generalized diffusion equations are shown to provide certain probability density functions, in space or time, which are related to the relevant class of stable distributions. For the space-fractional diffusion, a random-walk model is also proposed.
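The two generalizations mentioned above are usually written as follows (standard forms, with a Riesz space derivative of order α and a Caputo time derivative of order β; not necessarily the paper's exact notation):

```latex
\frac{\partial u}{\partial t} \;=\; D\,\frac{\partial^{\alpha} u}{\partial |x|^{\alpha}}
\quad (0<\alpha\le 2),
\qquad\qquad
\frac{\partial^{\beta} u}{\partial t^{\beta}} \;=\; D\,\frac{\partial^{2} u}{\partial x^{2}}
\quad (0<\beta\le 1).
```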

Journal ArticleDOI
TL;DR: It is shown that the optimal rate of convergence is simultaneously achieved for log-densities in Sobolev spaces W_2^s(U) without knowing the smoothness parameter s and norm parameter U in advance.
Abstract: Probability models are estimated by use of penalized log-likelihood criteria related to the Akaike (1973) information criterion (AIC) and minimum description length (MDL). The accuracies of the density estimators are shown to be related to the tradeoff between three terms: the accuracy of approximation, the model dimension, and the descriptive complexity of the model classes. The asymptotic risk is determined under conditions on the penalty term, and is shown to be minimax optimal for some cases. As an application, we show that the optimal rate of convergence is simultaneously achieved for log-densities in Sobolev spaces W_2^s(U) without knowing the smoothness parameter s and norm parameter U in advance. Applications to neural network models and sparse density function estimation are also provided.
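The generic shape of such a criterion (a sketch of the common form, not the paper's exact penalty) is to pick, over a list of candidate model classes, the fit that minimizes

```latex
\hat f \;=\; \arg\min_{m}\;\min_{f\in\mathcal F_m}
\left\{ -\sum_{i=1}^{n}\log f(X_i) \;+\; \mathrm{pen}_n(m) \right\},
```

where pen_n(m) grows with the dimension and descriptive complexity of the class F_m; AIC and MDL correspond to particular penalty choices.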

Journal ArticleDOI
TL;DR: The more general case where the two Gaussian noise processes describing the Rice process are correlated is considered, and the resulting process, called the extended Suzuki process, can be used as a suitable statistical model for describing the fading behavior of large classes of frequency-nonselective land mobile satellite channels.
Abstract: This paper deals with the statistical characterization of a stochastic process which is the product of a Rice and a lognormal process. Thereby, we consider the more general case where the two Gaussian noise processes describing the Rice process are correlated. The resulting process is called the extended Suzuki process and can be used as a suitable statistical model for describing the fading behavior of large classes of frequency-nonselective land mobile satellite channels. In particular, the statistical properties (e.g., probability density function (pdf) of amplitude and phase, level-crossing rate, and average duration of fades) of the Rice process with cross-correlated components as well as of the proposed extended Suzuki process are investigated. Moreover, all statistical model parameters are optimized numerically to fit the cumulative distribution function and the level-crossing rate of the underlying analytical model to measured data collected in different environments. Finally, an efficient simulation model is presented which is in excellent conformity with the proposed analytical model.

Journal ArticleDOI
TL;DR: This work directly determines the error probability from the characteristic function of decision variables, resulting in closed-form solutions involving matrix differentiation in postdetection combining systems in an arbitrarily correlated Nakagami environment.
Abstract: Postdetection combining is a popular means to improve the bit error performance of DPSK and noncoherent FSK (NFSK) systems over fading channels. Nevertheless, the error performance of such systems in an arbitrarily correlated Nakagami environment is not available in the literature. The difficulty arises from inherent nonlinearity in noncoherent detection and from attempts to determine explicitly the probability density function of the total signal-to-noise ratio at the combiner output. We directly determine the error probability from the characteristic function of decision variables, resulting in closed-form solutions involving matrix differentiation. The performance calculation is further simplified by developing a recursive technique. The theory is illustrated by analyzing two feasible antenna arrays used in base stations for diversity reception, ending up with some findings of interest to system design.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the average monthly flow of the Salt River near Roosevelt, Arizona, and compare the classical approach applied to the logarithms of the flow data to the alternative heavy tail approach, demonstrating how the classical approach seriously understates the probability of large fluctuations.
Abstract: Recent advances in time series analysis provide alternative models for river flows in which the innovations have heavy tails, so that some of the moments do not exist. The probability of large fluctuations is much larger than for standard models. We survey some recent theoretical developments for heavy tail time series models and illustrate their practical application to river flow data from the Salt River near Roosevelt, Arizona. We also include some simple diagnostics that the practitioner can use to identify when the methods of this paper may be useful. In this paper we will discuss the application of heavy tail models to hydrology. Since many river flow time series exhibit occasional sharp spikes, a model that captures this heavy tail characteristic is important in adequately describing the series. Typically, a time series with heavy tails is appropriately transformed so that normal asymptotics apply. We propose a new model that allows a more faithful representation of the river flow without preliminary transformations. As an application, we consider the average monthly flow of the Salt River near Roosevelt, Arizona. The Salt River flow series is periodically stationary; that is, its mean and covariance functions are periodic with respect to time. We fit a periodic autoregressive moving average (ARMA) model to the data without moment assumptions (Anderson and Meerschaert, 1997). We compare this model, which has stable asymptotics, to the classical model presented by Anderson and Vecchia (1993), which has normal asymptotics after log transforming the data, so that the innovations have finite fourth moment. Regarding the extreme value behavior of the models, we contrast the classical approach applied to the logarithms of the flow data to the alternative heavy tail approach and demonstrate how the classical approach seriously understates the probability of large fluctuations. In the concluding remarks of the paper we mention some simple diagnostics that the practitioner can use to identify when the methods of this paper may be useful. We say that a probability distribution has heavy tails if the tails of the distribution diminish at an algebraic rate (like some power of x) rather than at an exponential rate. In this case some of the moments of this probability distribution will fail to exist. The kth moment of a probability distribution function F(x) with density f(x) is defined by the integral of x^k f(x) over the real line.
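Spelled out (a standard statement, not quoted from the paper), the kth moment and the heavy-tail condition read:

```latex
E\,|X|^{k} \;=\; \int_{-\infty}^{\infty} |x|^{k} f(x)\,dx ,
\qquad
P(|X| > x) \sim C\,x^{-\alpha}\ (x\to\infty)
\;\Longrightarrow\; E\,|X|^{k} < \infty \ \text{only for } k < \alpha ,
```

which is the precise sense in which "some of the moments do not exist."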

Journal ArticleDOI
TL;DR: In this article, the distribution of density as a function of position within the Earth is much less well constrained than the seismic velocities, and the present generation of density models has been constructed using linearized inversion techniques from earlier models.
Abstract: The distribution of density as a function of position within the Earth is much less well constrained than the seismic velocities. The primary information comes from the mass and moment of inertia of the Earth and this information alone requires that there be a concentration of mass towards the centre of the globe. Additional information is to be found in the frequencies of the graver normal modes of the Earth which are sensitive to density through self-gravitation effects induced in deformation. The present generation of density models has been constructed using linearized inversion techniques from earlier models, which ultimately relate back to models developed by Bullen and based in large part on physical arguments. A number of experiments in non-linear inversion have been conducted using the PREM reference model, with fixed velocity and attenuation, but with the density model constrained to lie within fixed bounds on both density and density gradient. A set of models is constructed from a uniform probability density within the bound and slope constraints. Each of the resultant density models is tested against the mass and moment of inertia of the Earth, and for successful models a comparison is made with observed normal mode frequencies. From the misfit properties of the ensemble of models the robustness of the density profile in different portions of the Earth can be assessed, which can help with the design of parametrization for future reference models. In both the lower mantle and the outer core it would be desirable to allow a more flexible representation than the single cubic polynomial employed in PREM.

Journal ArticleDOI
TL;DR: In this article, the generalized Langevin model is combined with a model for viscous transport, which provides exact treatment of viscous inhomogeneous effects, and enables consistent imposition of the no-slip condition in a particle framework.
Abstract: Probability density function (p.d.f.) methods are extended to include modelling of wall-bounded turbulent flows. A p.d.f. near-wall model is developed in which the generalized Langevin model is combined with a model for viscous transport. This provides exact treatment of viscous inhomogeneous effects, and enables consistent imposition of the no-slip condition in a particle framework. The method of elliptic relaxation is combined with additional boundary conditions and with the generalized Langevin model to provide an analogy for the near-wall fluctuating continuity equation. This provides adequate representation of the near-wall anisotropy of the Reynolds stresses. The model is implemented with a p.d.f./Monte Carlo simulation for the joint p.d.f. of velocity and turbulent frequency. Results are compared with DNS and experimental profiles for fully developed turbulent channel flow.
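For orientation, the generalized Langevin model referred to above evolves particle velocities by a stochastic differential equation of the form below (standard notation from p.d.f. methods; the coefficients G_ij and C_0 are model-dependent and the near-wall treatment in the paper modifies them):

```latex
dU_i \;=\; -\frac{1}{\rho}\,\frac{\partial \langle p\rangle}{\partial x_i}\,dt
\;+\; G_{ij}\,\bigl(U_j - \langle U_j\rangle\bigr)\,dt
\;+\; \sqrt{C_0\,\varepsilon}\;dW_i ,
```

with W_i an isotropic Wiener process and ε the dissipation rate; the near-wall extension adds viscous transport and determines the coefficients through elliptic relaxation.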

Journal ArticleDOI
TL;DR: In this article, the probability density distributions of one-minute values of global irradiance, conditioned on the optical air mass, were analyzed as an approximation to the instantaneous distributions.

Book ChapterDOI
TL;DR: It is shown that evolutionary algorithms are able to converge to the set of minimal elements in finite time with probability one, provided that the search space is finite, the time-invariant variation operator is associated with a positive transition probability function and that the selection operator obeys the so-called ‘elite preservation strategy.’
Abstract: The task of finding minimal elements of a partially ordered set is a generalization of the task of finding the global minimum of a real-valued function or of finding Pareto-optimal points of a multicriteria optimization problem. It is shown that evolutionary algorithms are able to converge to the set of minimal elements in finite time with probability one, provided that the search space is finite, the time-invariant variation operator is associated with a positive transition probability function and that the selection operator obeys the so-called ‘elite preservation strategy.’

Book ChapterDOI
01 Jan 1998
TL;DR: The solution method uses the concept of a p-level efficient point (pLEP) introduced by the first author (1990) and works in such a way that first all pLEPs are enumerated, then a cutting plane method does the rest of the job.
Abstract: The most important static stochastic programming models that can be formulated in connection with a linear programming problem, where some of the right-hand side values are random variables, are: the simple recourse model, the probabilistic constrained model and the combination of the two. In this paper we present algorithmic solutions to the second and third models under the assumption that the random variables have a discrete joint distribution. The solution method uses the concept of a p-level efficient point (pLEP) introduced by the first author (1990) and works in such a way that first all pLEPs are enumerated, then a cutting plane method does the rest of the job.

Journal ArticleDOI
TL;DR: In this article, a kinetic theory of fatigue surface cracking processes is presented, which takes fully into account the crack coalescence phenomenon and derives a balance equation for the crack density function in a one-dimensional phase space similar to the Boltzmann equation for gases.
Abstract: This paper presents a kinetic theory of fatigue surface cracking processes that takes fully into account the crack coalescence phenomenon. We derive a balance equation for the crack density function in a one-dimensional phase space, similar to the Boltzmann equation for gases. The equation is solved numerically by a finite-difference method and the results are compared with a more classical Monte Carlo simulation. The fatigue life probability distribution is calculated by assuming that failure occurs when cracks larger than a given critical size appear.

Journal ArticleDOI
TL;DR: In this article, a generalized diffusion equation resulting from an additive two-state process, in combination with an asymptotically fractal (asymptotic power-law) waiting-time distribution is discussed.
Abstract: We discuss a generalized diffusion equation resulting from an additive two-state process, in combination with an asymptotically fractal (asymptotic power-law) waiting-time distribution. The obtained equation is an extension of previously discussed fractional diffusion equations. Our description leads to a mean squared displacement which describes enhanced, subballistic transport for long times. The short-time behavior, however, is of a ballistic nature. This separation into two domains results from the introduction of a time scale through the asymptotically fractal waiting-time distribution. This is also mirrored by the observation that, for small times, our generalized diffusion equation reduces to the standard Cattaneo equation. The asymptotic probability density is of compressed Gaussian type, and thus differs from the Lévy tail generally found for these kinds of processes.
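For reference, the standard Cattaneo (telegraph) equation mentioned as the small-time limit has the form below (standard notation; τ is a relaxation time and K a diffusion coefficient, not symbols taken from the paper):

```latex
\tau\,\frac{\partial^{2} P}{\partial t^{2}} \;+\; \frac{\partial P}{\partial t}
\;=\; K\,\frac{\partial^{2} P}{\partial x^{2}} ,
```

which interpolates between wave-like (ballistic) behavior at times much shorter than τ and ordinary diffusion at longer times.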

Journal ArticleDOI
TL;DR: In this article, the authors considered a dissipation field (square gradient) of a passive scalar advected by incompressible turbulence and showed that the PDF of the dissipation is a nonperturbative object with respect to the inverse Peclet number.
Abstract: The probability distribution of the gradients of turbulent fields is probably the most remarkable manifestation of the intermittency of developed turbulence and related strong non-Gaussianity. A typical plot of the logarithm of the gradient's probability density function (PDF) (which would be parabolic for Gaussian statistics) is concave rather than convex, with a strong central peak and slowly decaying tails. This is natural for an intermittent field since rare strong fluctuations are responsible for the tails, while large quiet regions are related to the central peak. In particular, such PDFs were observed for the dissipation field (square gradient) of a passive scalar advected by incompressible turbulence, which is the subject of the present paper. We consider scalar advection within the framework of the Kraichnan model, assuming the velocity field to be delta-correlated in time [1]. Most of the rigorous results on turbulent mixing have been obtained so far with the help of that model, which is likely to play in turbulence the role the Ising model played in critical phenomena. High-order moments of the scalar were treated hitherto by perturbation theory around Gaussian limits. Clearly, the kind of strongly non-Gaussian PDF observed for gradients cannot be treated by any perturbation theory that starts from Gaussian statistics as the zeroth approximation. Since we consider developed turbulence with large Peclet number Pe (measuring the relative strength of advection with respect to diffusion at the pumping scale), it is tempting to use Pe^-1 as a small parameter. Yet any attempt to treat the diffusion term perturbatively is doomed to fail because the PDF of the dissipation is a nonperturbative object with respect to the inverse Peclet number: it is zero at zero diffusivity yet has nonzero limits.

Proceedings ArticleDOI
01 Jan 1998
TL;DR: An accurate and tractable Markov-based model for MPEG video traffic is proposed that is able to capture both the inter-GOP and the intra-GOP correlation and is used to evaluate the loss probability in an ATM multiplexer loaded by an MPEG video source and an aggregate of external traffic.
Abstract: An accurate and tractable Markov-based model for MPEG video traffic is proposed. Before reaching a definition of the model, a large number of MPEG video sequences are analyzed and their statistical characteristics are highlighted. Besides the well-known autocorrelation function, testifying to the pseudo-periodicity of the sequence in the short term, and the gamma-shaped probability density function, the correlation between different frames belonging to the same group of pictures (GOP) is also accurately studied. Then a structured model is proposed. Thanks to its two-level structure it is able to capture both the inter-GOP and the intra-GOP correlation. In order to obtain the first level of the model, the well-known inverse eigenvalue problem is solved in the discrete-time domain. Finally, the model is used to evaluate the loss probability in an ATM multiplexer loaded by an MPEG video source and an aggregate of external traffic. The accuracy of the model is evaluated by comparing the analytically obtained loss probability with the loss probability calculated by simulating the system using real traffic sequences as driving traces.

Journal ArticleDOI
TL;DR: In this paper, a simple procedure is presented for characterising the probability distribution function of the dwell time in a cellular telephony system and other statistical figures related to mobility are also provided.
Abstract: A simple procedure is presented for characterising the probability distribution function of the dwell time in a cellular telephony system. Other statistical figures related to mobility are also provided.

Journal ArticleDOI
Guo-Kang Er
TL;DR: In this article, the probability density function (PDF) and the mean up-crossing rate (MCR) of the stationary responses of nonlinear stochastic systems excited by white noise are analyzed based on the assumption that the PDF of the responses is approximated with an exponential function of a polynomial in the state variables.
Abstract: The probability density function (PDF) and the mean up-crossing rate (MCR) of the stationary responses of nonlinear stochastic systems excited by white noise are analyzed based on the assumption that the PDF of the responses is approximated with an exponential function of a polynomial in the state variables. Based upon the approximate PDF, a new technique is developed for obtaining the approximate PDF solution of the Fokker–Planck–Kolmogorov equation, and consequently, the MCR of the system responses is analyzed. Numerical results showed that the approximate PDFs and MCRs approach the exact ones as the degree of the polynomial increases.
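The assumed form described above can be written out as follows (notation assumed for illustration, not quoted from the paper):

```latex
\tilde p(\mathbf{x}) \;=\; C\,\exp\bigl[Q_n(\mathbf{x})\bigr],
\qquad
Q_n(\mathbf{x}) \;=\; \sum_{1 \le i_1+\cdots+i_m \le n} a_{i_1\cdots i_m}\,x_1^{i_1}\cdots x_m^{i_m},
```

where C is a normalization constant and the coefficients a are fixed by substituting this ansatz into the Fokker–Planck–Kolmogorov equation (the handling of the resulting residual is detailed in the paper).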

Journal ArticleDOI
TL;DR: In this article, a general multi-modal Weibull distribution, together with a procedure for estimating its unknown parameters, is presented; it can be used for every stationary process and for any shape of loading spectrum.