scispace - formally typeset
Topic

Probability density function

About: Probability density function is a research topic. Over its lifetime, 22,321 publications have been published within this topic, receiving 422,885 citations. The topic is also known as: probability function & PDF.


Papers
Journal ArticleDOI
TL;DR: In this article, a distribution-dependent stochastic differential equation (DDSDE) is used to solve a nonlinear PDE, with a nonlinear semigroup P_t* on the space of probability measures.

174 citations
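The drift of a DDSDE depends on the law of the solution itself. A generic way to simulate one (a standard interacting-particle approximation, not the construction in this paper) is to replace the law by the empirical measure of a particle cloud. A minimal Euler-Maruyama sketch, with an assumed mean-reverting drift b(x, mu) = -(x - mean(mu)); all names here are illustrative:

```python
import math
import random

def simulate_ddsde(n_particles=500, t_end=1.0, dt=0.01, sigma=0.1, seed=0):
    """Interacting-particle Euler-Maruyama scheme for the McKean-Vlasov SDE
    dX_t = -(X_t - E[X_t]) dt + sigma dW_t,
    with E[X_t] replaced by the empirical mean of the particles."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 2.0) for _ in range(n_particles)]  # spread-out start
    steps = int(t_end / dt)
    sq = math.sqrt(dt)
    for _ in range(steps):
        m = sum(x) / n_particles  # empirical stand-in for the law's mean
        x = [xi - (xi - m) * dt + sigma * sq * rng.gauss(0.0, 1.0) for xi in x]
    return x

particles = simulate_ddsde()
mean = sum(particles) / len(particles)
var = sum((p - mean) ** 2 for p in particles) / len(particles)
```

Because the mean-field drift pulls every particle toward the cloud's own mean, the empirical variance contracts from its initial level toward a small noise-driven stationary value.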

Journal ArticleDOI
TL;DR: In this article, a theory of extreme deviations is developed, devoted to the far tail of the pdf of the sum X of a finite number n of independent random variables with a common pdf e^(−f(x)).
Abstract: Stretched exponential probability density functions (pdf), having the form of the exponential of minus a fractional power of the argument, are commonly found in turbulence and other areas. They can arise because of an underlying random multiplicative process. For this, a theory of extreme deviations is developed, devoted to the far tail of the pdf of the sum X of a finite number n of independent random variables with a common pdf e^(−f(x)). The function f(x) is chosen (i) such that the pdf is normalized and (ii) with a strong convexity condition that f''(x) > 0 and that x² f''(x) → +∞ for |x| → ∞; additional technical conditions ensure the control of the variations of f''(x). The tail behavior of the sum then comes mostly from individual variables in the sum all close to X/n, and the tail of the pdf is ∼ e^(−n f(X/n)). This theory is then applied to products of independent random variables, such that their logarithms are in the above class, yielding usually stretched exponential tails. An application to fragmentation is developed and compared to data from fault gouges. The pdf by mass is obtained as a weighted superposition of stretched exponentials, reflecting the coexistence of different fragmentation generations. For sizes near and above the peak size, the pdf is approximately log-normal, while it is a power law for the smaller fragments, with an exponent which is a decreasing function of the peak fragment size. The anomalous relaxation of glasses can also be rationalized using our result together with a simple multiplicative model of local atom configurations. Finally, we indicate the possible relevance to the distribution of small-scale velocity increments in turbulent flow.

174 citations
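The setting above is easy to simulate: a hedged sketch that draws iid samples with pdf proportional to e^(−|x|^1.5) (so f(x) = |x|^1.5, which satisfies the convexity condition) by rejection against a Laplace proposal, then forms sums of n such variables, whose far tail the theory predicts behaves like e^(−n f(S/n)). Function names are illustrative:

```python
import math
import random

def sample_f15(rng):
    """Rejection sampling from the density proportional to exp(-|x|**1.5),
    using a Laplace(0, 1) proposal.  The acceptance ratio exp(-|x|**1.5 + |x|)
    is maximized at |x| = 4/9, giving the envelope constant exp(4/27)."""
    log_m = 4.0 / 27.0
    while True:
        # Draw from Laplace(0, 1): random sign times Exponential(1).
        x = -math.log(1.0 - rng.random())
        if rng.random() < 0.5:
            x = -x
        # Accept with probability exp(-|x|**1.5 + |x| - 4/27) <= 1.
        if rng.random() < math.exp(-abs(x) ** 1.5 + abs(x) - log_m):
            return x

def sum_samples(n_terms, n_sums, seed=0):
    """Empirical sums S = X_1 + ... + X_n of iid draws with pdf ~ exp(-f(x)),
    f(x) = |x|**1.5; extreme-deviation theory puts the far tail of the
    pdf of S at ~ exp(-n_terms * f(S / n_terms))."""
    rng = random.Random(seed)
    return [sum(sample_f15(rng) for _ in range(n_terms))
            for _ in range(n_sums)]

sums = sum_samples(n_terms=5, n_sums=2000)
```

The dominant tail configurations are those where each of the n terms sits near S/n, which is what produces the e^(−n f(S/n)) behavior rather than a single-large-term tail.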

Journal ArticleDOI
TL;DR: A particle-based nonlinear filtering scheme, related to recent work on chainless Monte Carlo, designed to focus particle paths sharply so that fewer particles are required.
Abstract: We present a particle-based nonlinear filtering scheme, related to recent work on chainless Monte Carlo, designed to focus particle paths sharply so that fewer particles are required. The main features of the scheme are a representation of each new probability density function by means of a set of functions of Gaussian variables (a distinct function for each particle and step) and a resampling based on normalization factors and Jacobians. The construction is demonstrated on a standard, ill-conditioned test problem.

173 citations
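For context, a plain bootstrap particle filter, the standard baseline such schemes improve on (the paper's Gaussian-variable representation and Jacobian-based resampling are not reproduced here), can be sketched for an assumed linear-Gaussian test model:

```python
import math
import random

def bootstrap_filter(observations, n_particles=200, proc_std=1.0,
                     obs_std=1.0, seed=0):
    """Bootstrap particle filter for the assumed state-space model
        x_t = 0.9 * x_{t-1} + N(0, proc_std^2),   y_t = x_t + N(0, obs_std^2).
    Returns the posterior-mean state estimate at each step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate each particle through the state dynamics.
        particles = [0.9 * p + rng.gauss(0.0, proc_std) for p in particles]
        # Weight by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Weighted posterior mean, then multinomial resampling.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

obs = [0.5, 1.2, 0.8, -0.3, 0.1]
est = bootstrap_filter(obs)
```

The weakness this baseline exhibits, particle paths spreading away from the data so that many particles receive negligible weight, is precisely what the focusing construction in the paper is designed to mitigate.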

Journal ArticleDOI
TL;DR: In this article, the eigenvector dimension reduction (EDR) method was proposed for probability analysis; it makes a significant improvement on the univariate dimension reduction (DR) method used for estimating statistical moments of mildly nonlinear system responses in engineering applications.
Abstract: This paper presents the eigenvector dimension reduction (EDR) method for probability analysis that makes a significant improvement based on univariate dimension reduction (DR) method. It has been acknowledged that the DR method is accurate and efficient for assessing statistical moments of mildly nonlinear system responses in engineering applications. However, the recent investigation on the DR method has found difficulties of instability and inaccuracy for highly nonlinear system responses while maintaining reasonable efficiency. The EDR method integrates the DR method with three new technical components: (1) eigenvector sampling, (2) one-dimensional response approximation, and (3) a stabilized Pearson system. First, 2N+1 and 4N+1 eigenvector sampling schemes are proposed to resolve correlated and asymmetric random input variables. The eigenvector samples are chosen along the eigenvectors of the covariance matrix of random parameters. Second, the stepwise moving least squares (SMLS) method is proposed to accurately construct approximate system responses along the eigenvectors with the response values at the eigenvector samples. Then, statistical moments of the responses are estimated through recursive numerical integrations. Third, the stabilized Pearson system is proposed to predict probability density functions (PDFs) of the responses while eliminating singular behavior of the original Pearson system. Results for some numerical and engineering examples indicate that the EDR method is a very accurate, efficient, and stable probability analysis method in estimating PDFs, component reliabilities, and qualities of system responses.

172 citations
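The univariate DR method that EDR builds on approximates a response g by a sum of one-dimensional functions along each coordinate axis through the mean point. A minimal sketch for independent standard-normal inputs, using a 5-point Gauss-Hermite rule for the univariate integrations (the eigenvector sampling, SMLS approximation, and Pearson-system steps of EDR are omitted):

```python
# 5-point Gauss-Hermite rule rescaled for a standard-normal weight.
GH_NODES = [-2.856970, -1.355626, 0.0, 1.355626, 2.856970]
GH_WEIGHTS = [0.011257, 0.222076, 0.533333, 0.222076, 0.011257]

def dr_moments(g, dim):
    """First two moments of g(X), X ~ N(0, I_dim), under the univariate
    dimension-reduction approximation
        g(x) ~= sum_i g(0,...,x_i,...,0) - (dim - 1) * g(0).
    The moment formulas below treat the axis terms as independent, which
    is exact whenever g is additively separable in its coordinates."""
    g0 = g([0.0] * dim)
    mean, var = -(dim - 1) * g0, 0.0
    for i in range(dim):
        # Univariate quadrature along axis i through the mean point.
        vals = []
        for node in GH_NODES:
            x = [0.0] * dim
            x[i] = node
            vals.append(g(x))
        m1 = sum(w * v for w, v in zip(GH_WEIGHTS, vals))
        m2 = sum(w * v * v for w, v in zip(GH_WEIGHTS, vals))
        mean += m1
        var += m2 - m1 * m1  # variances of independent axis terms add
    return mean, var

# Separable quadratic: true mean 2 and variance 4 are recovered.
m, v = dr_moments(lambda x: x[0] ** 2 + x[1] ** 2, dim=2)
```

Each moment estimate costs only 5 * dim + 1 response evaluations, which is the efficiency argument for DR-type methods over direct multidimensional integration or Monte Carlo.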

10 Dec 1963
TL;DR: The behavior of the lower bound of the average normalized error for fixed variables is examined, and the behavior of the minimum error when the number of representation points is large is demonstrated, taking into account the entropy rate of the discrete quantizing sequence.
Abstract: The approximation generated by a specific but arbitrary partition and a specific but arbitrary set of representation points is examined. A statistical measure of error is investigated. The size of the error is measured by some moment of the random variable. Such a measure involves both the geometrical properties of the partitioning in the k-dimensional space and the probability density. The approximation of the random variable is accomplished by selecting the representation points at random, independently of one another and according to some given density function, and then constructing the partition. In Chapter 2 the behavior of the lower bound of the average normalized error for fixed variables is examined. The behavior of the minimum error when there is a large number of representation points is demonstrated. This extension takes into account the entropy rate of the discrete quantizing sequence. (Author)

172 citations
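The random-quantization scheme described, representation points drawn iid from a given density and a nearest-neighbor partition constructed from them, can be sketched as follows (a hedged illustration with an assumed standard-normal source and squared-error moment; function names are made up):

```python
import random

def random_quantizer_mse(n_points, n_samples=5000, seed=0):
    """Mean-squared quantization error when the representation points are
    drawn iid from the source density itself (here both standard normal)
    and the partition is the induced nearest-neighbor (Voronoi) one."""
    rng = random.Random(seed)
    points = [rng.gauss(0.0, 1.0) for _ in range(n_points)]
    err = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        # Quantize x to its nearest representation point.
        q = min(points, key=lambda p: (p - x) ** 2)
        err += (q - x) ** 2
    return err / n_samples

mse_small = random_quantizer_mse(8)
mse_large = random_quantizer_mse(256)
```

As the abstract's asymptotic analysis suggests, the average error shrinks as the number of representation points grows, which the two calls above illustrate empirically.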


Network Information
Related Topics (5)
Nonlinear system
208.1K papers, 4M citations
88% related
Monte Carlo method
95.9K papers, 2.1M citations
87% related
Estimator
97.3K papers, 2.6M citations
86% related
Optimization problem
96.4K papers, 2.1M citations
85% related
Artificial neural network
207K papers, 4.5M citations
85% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    382
2022    906
2021    906
2020    1,047
2019    1,117
2018    1,083