
Showing papers on "Probability density function" published in 2016


Proceedings Article
15 Jun 2016
TL;DR: This paper proposes a data transformation, the inverse autoregressive flow (IAF), that transforms a simple distribution over the latent variables into a much more flexible one while still allowing the resulting variables' probability density function to be computed.
Abstract: We propose a simple and scalable method for improving the flexibility of variational inference through a transformation with autoregressive neural networks. Autoregressive neural networks, such as RNNs or the PixelCNN, are very powerful models and potentially interesting for use as variational posterior approximation. However, ancestral sampling in such networks is a long sequential operation, and therefore typically very slow on modern parallel hardware, such as GPUs. We show that by inverting autoregressive neural networks we can obtain equally powerful posterior models from which we can sample efficiently on modern hardware. We show that such data transformations, inverse autoregressive flows (IAF), can be used to transform a simple distribution over the latent variables into a much more flexible distribution, while still allowing us to compute the resulting variables' probability density function. The method is simple to implement, can be made arbitrarily flexible and, in contrast with previous work, is well applicable to models with high-dimensional latent spaces, such as convolutional generative models. The method is applied to a novel deep architecture of variational auto-encoders. In experiments with natural images, we demonstrate that autoregressive flow leads to significant performance gains.
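As a rough illustration of the core idea (not the authors' implementation), the sketch below applies one inverse autoregressive step z' = μ(z) + σ(z)⊙z using a toy masked-linear "autoregressive network"; the architecture, dimensionality, and parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                      # latent dimensionality (toy value)

# Toy "autoregressive network": strictly lower-triangular weights so that
# mu_i and sigma_i depend only on z_1..z_{i-1}.
W_mu = np.tril(rng.normal(size=(D, D)), k=-1)
W_s = np.tril(rng.normal(size=(D, D)), k=-1)

def iaf_step(z):
    """One inverse autoregressive flow step: z' = mu(z) + sigma(z) * z."""
    mu = W_mu @ z
    sigma = np.exp(0.5 * (W_s @ z))        # positive scales
    z_new = mu + sigma * z
    # The Jacobian is lower triangular with diagonal sigma, so log|det| is just
    # sum(log sigma): this is what keeps the transformed density cheap to evaluate.
    log_det = np.sum(np.log(sigma))
    return z_new, log_det

# Start from a simple base distribution and track the density correction.
z0 = rng.normal(size=D)
log_q = -0.5 * np.sum(z0**2 + np.log(2 * np.pi))   # log N(z0; 0, I)
z1, log_det = iaf_step(z0)
log_q_z1 = log_q - log_det                         # density of the more flexible posterior
print(z1, log_q_z1)
```

Stacking several such steps (each with its own autoregressive network) gives the arbitrarily flexible posterior the abstract describes, while sampling remains a single parallel pass.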

767 citations


Journal ArticleDOI
TL;DR: The methods and design principles of flexsurv, an R package for fully-parametric modeling of survival data, are explained, giving several worked examples of its use.
Abstract: flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in 'Surv' objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use.
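flexsurv itself is an R package; as a language-neutral illustration of the underlying principle (maximizing the full log-likelihood of a parametric time-to-event distribution under right censoring), here is a minimal Python sketch for a Weibull model. The data and parameterization are invented, and this is not the flexsurvreg API.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
# Simulated right-censored survival data (true shape 1.5, scale 2.0, assumed here).
t_true = weibull_min.rvs(1.5, scale=2.0, size=300, random_state=rng)
c_time = rng.uniform(0.5, 4.0, size=300)           # censoring times
time = np.minimum(t_true, c_time)
event = (t_true <= c_time).astype(float)           # 1 = observed event, 0 = censored

def neg_loglik(params):
    """Full log-likelihood: density for events, survival function for censored times."""
    log_shape, log_scale = params                  # log-parameterization keeps both positive
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    logf = weibull_min.logpdf(time, shape, scale=scale)
    logS = weibull_min.logsf(time, shape, scale=scale)
    return -np.sum(event * logf + (1.0 - event) * logS)

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print(f"shape ~ {shape_hat:.3f}, scale ~ {scale_hat:.3f}")
```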

335 citations


Journal ArticleDOI
TL;DR: In this article, a unified performance analysis of a single free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques is presented.
Abstract: In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques [i.e., intensity modulation/direct detection (IM/DD) and heterodyne detection]. More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system, all in terms of the Meijer G function except for the moments, which are given in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as the outage probability, the scintillation index (SI), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, for which we present closed-form lower bounds), all in terms of Meijer G functions except for the SI, which is given in terms of simple elementary functions. Additionally, for all the expressions given earlier in terms of the Meijer G function, we derive asymptotic results in the high-SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer G function. We also derive new asymptotic expressions for the ergodic capacity in the low- as well as high-SNR regimes in terms of simple elementary functions by utilizing moments. All the presented results are verified via computer-based Monte Carlo simulations.
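The closed-form results above are stated in terms of the Meijer G function; as a generic illustration of how such expressions can be evaluated numerically (not the paper's specific formulas), the snippet below checks a standard identity, exp(-z) = G^{1,0}_{0,1}(z | -; 0), with mpmath.

```python
import mpmath as mp

# Known identity: exp(-z) equals the Meijer G function G^{1,0}_{0,1}(z | -; 0).
for z in (0.3, 1.0, 2.5):
    g = mp.meijerg([[], []], [[0], []], z)   # a_s = ([], []), b_s = ([0], [])
    print(z, g, mp.exp(-z))                  # the last two columns should agree
```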

273 citations


Journal ArticleDOI
TL;DR: A new stability concept generalizing traditional mean-square stability is proposed, leading to numerically testable criteria based on the SMK.
Abstract: This technical note is concerned with the problems of stability and stabilization for a class of discrete-time semi-Markov jump linear systems (S-MJLSs). The discrete-time semi-Markov kernel (SMK) is introduced, in which the probability density function of the sojourn time depends on both the current and the next system mode. As a consequence, different types of sojourn-time distributions, and/or different parameters within the same type of distribution, can coexist in each mode of an SMK, depending on the target mode towards which the system jumps. The underlying S-MJLSs are therefore more general than those considered in existing studies. A new stability concept generalizing traditional mean-square stability is proposed such that numerically testable criteria based on the SMK are obtained. Numerical examples are presented to illustrate the validity and advantage of the developed theoretical results.
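To make the mode-dependent sojourn-time idea concrete, here is a small hedged sketch (not taken from the paper) that simulates a discrete-time semi-Markov jump process in which the sojourn-time distribution depends on both the current mode and the target mode, as the semi-Markov kernel allows. Transition probabilities and distribution choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Embedded Markov chain of mode transitions (toy values).
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])

def sample_sojourn(i, j):
    """Sojourn time depends on both the current mode i and the target mode j."""
    if (i + j) % 2 == 0:
        return rng.geometric(0.3)          # geometric sojourn (discrete time)
    return 1 + rng.poisson(2.0)            # shifted-Poisson sojourn

mode, t, path = 0, 0, []
while t < 50:
    nxt = rng.choice(3, p=P[mode])
    tau = sample_sojourn(mode, nxt)
    path.append((t, mode, tau))            # (switching instant, mode, sojourn time)
    t += tau
    mode = nxt
print(path[:5])
```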

245 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extend the Koksma-Hlawka inequality to the case of non-uniform distributions and propose a new strategy for determining representative point sets via rearrangement, which is then applied to the stochastic dynamic response analysis of strongly nonlinear structures by incorporating it into the probability density evolution method.

171 citations


Journal ArticleDOI
TL;DR: In this article, a new data assimilation approach based on the particle filter (PF) was proposed for nonlinear/non-Gaussian applications in geoscience, denoted the local PF, which extends the particle weights into vector quantities to reduce the influence of distant observations on the weight calculations via a localization function.
Abstract: This paper presents a new data assimilation approach based on the particle filter (PF) that has potential for nonlinear/non-Gaussian applications in geoscience. Particle filters provide a Monte Carlo approximation of a system’s probability density, while making no assumptions regarding the underlying error distribution. The proposed method is similar to the PF in that particles—also referred to as ensemble members—are weighted based on the likelihood of observations in order to approximate posterior probabilities of the system state. The new approach, denoted the local PF, extends the particle weights into vector quantities to reduce the influence of distant observations on the weight calculations via a localization function. While the number of particles required for standard PFs scales exponentially with the dimension of the system, the local PF provides accurate results using relatively few particles. In sensitivity experiments performed with a 40-variable dynamical system, the local PF require...
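A heavily simplified sketch of the central idea (not Poterjoy's full local PF algorithm): particle likelihood weights become vector quantities by tapering the influence of each observation with a localization function of its distance to each state variable, so distant observations barely affect the local weights. The Gaussian taper and all numbers below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_state = 20, 40
particles = rng.normal(size=(n_particles, n_state))

obs_loc, obs_val, obs_err = 10, 1.2, 0.5         # a single observation of state variable 10
loc_radius = 4.0                                  # localization length scale (assumed)

def localization(dist, radius):
    """Gaussian taper standing in for, e.g., a Gaspari-Cohn function."""
    return np.exp(-0.5 * (dist / radius) ** 2)

# Raw likelihood weight of each particle for this observation.
raw_w = np.exp(-0.5 * ((particles[:, obs_loc] - obs_val) / obs_err) ** 2)
raw_w /= raw_w.sum()

# Vector-valued weights: at each state variable, blend the likelihood weight with a
# uniform weight according to the localization coefficient at that variable.
dist = np.abs(np.arange(n_state) - obs_loc)
alpha = localization(dist, loc_radius)                         # shape (n_state,)
local_w = alpha[None, :] * raw_w[:, None] + (1 - alpha)[None, :] / n_particles
local_w /= local_w.sum(axis=0, keepdims=True)                  # normalize per variable
print(local_w[:, obs_loc], local_w[:, 35])                     # strong vs. negligible impact
```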

159 citations


Journal ArticleDOI
TL;DR: An approximate closed-form probability density function is derived for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links.
Abstract: A novel accurate and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios. The first of them is when atmospheric turbulence is the dominant effect in relation to generalized pointing errors, and the second one when generalized pointing error is the dominant effect in relation to atmospheric turbulence. The second FSO scenario has not been studied in-depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.
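For readers who want to reproduce the generalized pointing-error statistics numerically, the sketch below draws Monte Carlo samples of the Beckmann radial displacement (independent Gaussian horizontal and elevation displacements with unequal jitters and nonzero boresight) and histograms its PDF. The jitter and boresight values are arbitrary, and the paper's closed-form approximation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_x, mu_y = 0.5, 0.2            # nonzero boresight errors per axis (assumed values)
sig_x, sig_y = 0.3, 0.6          # different jitters per axis (assumed values)

x = rng.normal(mu_x, sig_x, size=1_000_000)
y = rng.normal(mu_y, sig_y, size=1_000_000)
r = np.hypot(x, y)               # Beckmann-distributed radial displacement

pdf, edges = np.histogram(r, bins=200, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print(centers[np.argmax(pdf)])   # location of the PDF mode, as a quick sanity check
```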

119 citations


Journal ArticleDOI
TL;DR: A multidimensional, fast, and robust kernel density estimation is proposed: fastKDE, which exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster.

115 citations


Journal ArticleDOI
TL;DR: It is shown that one of the diffusing diffusivity models is equivalent to Brownian motion in the presence of a sink and a class of models for which it is possible to find analytical solutions is introduced.
Abstract: It has been found in many experiments that the mean square displacement of a Brownian particle x(T) diffusing in a rearranging environment is strictly Fickian, obeying ⟨(x(T))²⟩ ∝ T, but the probability distribution function for the displacement is not Gaussian. An explanation is that the diffusivity of the particle itself changes as a function of time. Models for this diffusing diffusivity have been solved analytically in the limit of small time, but simulations were necessary for intermediate and large times. We show that one of the diffusing diffusivity models is equivalent to Brownian motion in the presence of a sink, and we introduce a class of models for which it is possible to find analytical solutions. Our solution gives ⟨(x(T))²⟩ ∝ T for all times; at short times the probability distribution function of the displacement is exponential, and it crosses over to a Gaussian in the limit of long times and large displacements.
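A minimal simulation of one popular diffusing-diffusivity construction (D(t) = Y(t)² with Y an Ornstein-Uhlenbeck process; not necessarily the exact model solved analytically in this paper) reproduces the qualitative picture: a linear mean square displacement together with a non-Gaussian displacement distribution at short times.

```python
import numpy as np

rng = np.random.default_rng(5)
n, steps, dt, tau = 50_000, 200, 0.01, 1.0

Y = rng.normal(size=n)                    # OU process whose square acts as the diffusivity
x = np.zeros(n)
for _ in range(steps):
    Y += (-Y / tau) * dt + np.sqrt(2 * dt / tau) * rng.normal(size=n)
    D = Y**2
    x += np.sqrt(2 * D * dt) * rng.normal(size=n)

T = steps * dt
print("MSD / (2<D>T):", x.var() / (2 * Y.var() * T))   # should be O(1): Fickian scaling
# Positive excess kurtosis signals a non-Gaussian (heavier-tailed) displacement PDF.
print("excess kurtosis:", np.mean(x**4) / (3 * x.var()**2) - 1)
```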

113 citations


Journal ArticleDOI
TL;DR: The proposed approach is tested on a data set of measured wind speed and power from an operational wind turbine in a wind farm in Taiwan; the application demonstrates that the methodology works well for multi-step-ahead wind speed and power forecasting.

102 citations


Journal ArticleDOI
TL;DR: In this article, the suitability of different probability functions for estimating wind speed distribution at five stations, distributed in the east and southeast of Iran, was evaluated for the first time to estimate the distribution of wind speed.

Journal ArticleDOI
TL;DR: The derivation and analysis of mesoscopic (continuous-time random walk) equations for both jump and velocity models with stochastic resetting show that stationary states emerge for any shape of the waiting-time and jump-length distributions.
Abstract: It is known that introducing stochastic resetting in a random-walk process can lead to the emergence of a stationary state. Here we study this point from a general perspective through the derivation and analysis of mesoscopic (continuous-time random walk) equations for both jump and velocity models with stochastic resetting. In the case of jump models, it is shown that stationary states emerge for any shape of the waiting-time and jump-length distributions. The existence of such a state entails the saturation of the mean square displacement at a universal value that depends on the second moment of the jump distribution and the resetting probability. The transient dynamics towards the stationary state depends on how the waiting-time probability density function decays with time. If the moments of the jump distribution are finite, then the tail of the stationary distribution is universally exponential, but for Levy flights these tails decay as a power law whose exponent coincides with that of the jump distribution. For velocity models we observe that the stationary state emerges only if the distribution of flight durations has finite moments of lower order; otherwise, as occurs for Levy walks, the stationary state does not exist, and the mean square displacement grows ballistically or superdiffusively, depending on the specific shape of the distribution of movement durations.
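The saturation of the mean square displacement under resetting is easy to see in a quick simulation; the sketch below uses a discrete-time jump model with Gaussian jump lengths and a resetting probability r per step (toy choices, not the paper's general waiting-time formulation).

```python
import numpy as np

rng = np.random.default_rng(6)
n_walkers, n_steps, r = 20_000, 2_000, 0.01      # r = resetting probability per step

x = np.zeros(n_walkers)
msd = np.empty(n_steps)
for t in range(n_steps):
    x += rng.normal(size=n_walkers)              # jump with unit-variance length
    reset = rng.random(n_walkers) < r            # stochastic resetting to the origin
    x[reset] = 0.0
    msd[t] = np.mean(x**2)

# Without resetting the MSD grows linearly; with resetting it saturates near <l^2>/r.
print(msd[::500], 1.0 / r)
```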

Journal ArticleDOI
TL;DR: In this paper, a probabilistic optimal power flow (P-OPF) model with chance constraints that considers the uncertainties of wind power generation (WPG) and load is proposed.
Abstract: A novel probabilistic optimal power flow (P-OPF) model with chance constraints that considers the uncertainties of wind power generation (WPG) and load is proposed in this paper. An affine generation dispatch strategy is adopted to balance the system power uncertainty among several conventional generators, and a linear approximation of the cost function with respect to the power uncertainty is then used to compute the quantile (also known as the value-at-risk) corresponding to a given probability value. The proposed model takes this quantile as the objective function and minimizes it to meet distinct probabilistic cost regulation purposes via proper selection of the given probability. In particular, the hedging effect of the affine generation dispatch is also thoroughly investigated. In addition, an analytical method for calculating the probabilistic load flow (PLF) is developed from the probability density function of the WPG, which is approximated by a customized Gaussian mixture model whose parameters are easily obtained. Accordingly, the chance constraints on the transmission line power and on the power outputs of conventional units can be computed analytically. Numerical studies of two benchmark systems show the satisfactory accuracy of the PLF method and the effectiveness of the proposed P-OPF model.
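To illustrate the Gaussian-mixture approximation of the wind power PDF mentioned above (with synthetic data and an arbitrary number of components, not the paper's calibrated model), a few lines of scikit-learn suffice:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Synthetic, skewed "wind power" samples standing in for measured/forecast data.
wpg = np.concatenate([rng.weibull(2.0, 5000) * 0.4, rng.normal(0.9, 0.05, 1500)])
wpg = np.clip(wpg, 0.0, 1.0).reshape(-1, 1)       # per-unit power in [0, 1]

gmm = GaussianMixture(n_components=3, random_state=0).fit(wpg)
print("weights:", gmm.weights_)
print("means:", gmm.means_.ravel())
print("variances:", gmm.covariances_.ravel())

# The fitted mixture gives an analytic PDF, sum_k w_k * N(x; m_k, s_k^2),
# which is what makes downstream chance-constraint calculations tractable.
grid = np.linspace(0, 1, 5).reshape(-1, 1)
print(np.exp(gmm.score_samples(grid)))            # mixture density on a coarse grid
```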

Proceedings ArticleDOI
03 May 2016
TL;DR: The experimental results show that salt attenuates the received signal while air bubbles mainly introduce severe intensity fluctuations, and a combination of exponential and log-normal distributions is proposed to accurately describe the measured PDF in the intermediate scintillation-index regime.
Abstract: In this paper, we experimentally investigate the statistical distribution of intensity fluctuations for underwater wireless optical channels under different channel conditions, namely fresh and salty underwater channels with and without air bubbles. To do so, we first measure the received optical signal with a large number of samples. Based on the normalized acquired data, the channel coherence time and the probability density function (PDF) of the fluctuations are obtained for different channel scenarios. Our experimental results show that salt attenuates the received signal while air bubbles mainly introduce severe intensity fluctuations. Moreover, we observe that the log-normal distribution precisely fits the acquired data PDF for scintillation index (σ_I^2) values less than 0.1, while the Gamma-Gamma and K distributions aptly predict the intensity fluctuations for σ_I^2 > 1. Since neither of these distributions is capable of predicting the received irradiance for 0.1 < σ_I^2 < 1, we propose a combination of exponential and log-normal distributions to accurately describe the acquired data PDF in this regime of scintillation index.
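As a sketch of the post-processing described here (computing the scintillation index from normalized intensity samples and checking a log-normal fit in the weak-fluctuation regime), with synthetic data in place of the experimental measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Synthetic received-intensity samples for a weakly fluctuating channel.
I = rng.lognormal(mean=0.0, sigma=0.2, size=50_000)
I /= I.mean()                                    # normalize to unit mean irradiance

si = I.var() / I.mean() ** 2                     # scintillation index sigma_I^2
print("scintillation index:", si)

# Log-normal fit (expected to be adequate when sigma_I^2 < 0.1, per the paper).
shape, loc, scale = stats.lognorm.fit(I, floc=0)
ks = stats.kstest(I, "lognorm", args=(shape, loc, scale))
print("fitted sigma:", shape, "KS statistic:", ks.statistic)
```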

Journal ArticleDOI
Jie Li
TL;DR: In this article, a generalized probability density evolution equation (GPDEE) is derived to study how randomness propagates through a physical system; a completely uncoupled partial differential equation for the evolutionary probability density function is also obtained, which holds for any physical quantity of a probability-dissipative system.

Journal ArticleDOI
01 Nov 2016 - Energy
TL;DR: To quantify the uncertainty associated with power load and obtain more information about future load, a probability density forecasting method based on a quantile regression neural network with a triangle kernel function (QRNNT) is proposed.
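The core of any quantile-regression-based density forecaster is the pinball (quantile) loss; the short sketch below shows that loss on its own (the QRNNT network architecture and triangle kernel from the paper are not reproduced, and the load numbers are toy values).

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for quantile level q in (0, 1)."""
    err = y_true - y_pred
    return np.mean(np.maximum(q * err, (q - 1) * err))

y = np.array([100.0, 120.0, 90.0, 110.0])                  # observed loads (toy numbers)
print(pinball_loss(y, y_pred=np.full(4, 105.0), q=0.5))    # median forecast
print(pinball_loss(y, y_pred=np.full(4, 118.0), q=0.9))    # upper-quantile forecast
```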

Journal ArticleDOI
TL;DR: In this paper, a new class of continuous distributions with an extra positive parameter, called the type I half-logistic family, is studied; some special models are presented, and the asymptotics and shapes of these distributions are investigated.
Abstract: We study general mathematical properties of a new class of continuous distributions with an extra positive parameter called the type I half-logistic family. We present some special models and investigate the asymptotics and shapes. The new density function can be expressed as a linear combination of exponentiated densities based on the same baseline distribution. We derive a power series for the quantile function. Explicit expressions for the ordinary and incomplete moments, quantile and generating functions, Bonferroni and Lorenz curves, Shannon and Renyi entropies and order statistics are determined. We introduce a bivariate extension of the new family. We discuss the estimation of the model parameters by maximum likelihood and illustrate its potentiality by means of two applications to real data.
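For reference, one form of the type I half-logistic-G construction commonly seen in this literature is sketched below; the exact parameterization used in the paper may differ, so treat this as an assumption rather than a quotation.

```latex
% Type I half-logistic-G family (assumed parameterization), with baseline CDF G(x),
% baseline density g(x), and an extra positive shape parameter \lambda:
F(x) = \frac{1 - [1 - G(x)]^{\lambda}}{1 + [1 - G(x)]^{\lambda}}, \qquad
f(x) = \frac{2\lambda\, g(x)\, [1 - G(x)]^{\lambda - 1}}{\bigl(1 + [1 - G(x)]^{\lambda}\bigr)^{2}}.
```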

Journal ArticleDOI
TL;DR: This work provides a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols, including a fully analytic framework that captures the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space."
Abstract: Among the most popular and well-studied quantum characterization, verification, and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a Γ distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes - consistent with existing experimental data - highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.
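To visualize the headline claim that noise-averaged RB outcomes follow a Γ distribution whose shape reflects the correlation structure, the snippet below samples two Γ distributions with the same mean but different shape parameters (toy values, not the paper's derived parameters) and compares their spread and skew.

```python
import numpy as np
from scipy import stats

mean_error = 0.01                       # common mean RB error level (toy value)
for shape in (50.0, 2.0):               # large shape ~ Markovian noise, small ~ strongly correlated
    scale = mean_error / shape          # Gamma mean = shape * scale
    sample = stats.gamma.rvs(shape, scale=scale, size=100_000, random_state=0)
    print(f"shape={shape}: mean={sample.mean():.4f}, "
          f"std={sample.std():.4f}, skew={stats.skew(sample):.2f}")
```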

Journal ArticleDOI
TL;DR: The presented theory exploits the concept of probability density function f(z|ρ, t) for the maximum depth reached by the photons that are eventually re-emitted from the surface of the medium at distance ρ and time t to describe the penetration depth of light in random media.
Abstract: We propose a comprehensive statistical approach describing the penetration depth of light in random media. The presented theory exploits the concept of probability density function f(z|ρ, t) for the maximum depth reached by the photons that are eventually re-emitted from the surface of the medium at distance ρ and time t. Analytical formulas for f, for the mean maximum depth ⟨z_max⟩ and for the mean average depth reached by the detected photons at the surface of a diffusive slab are derived within the framework of the diffusion approximation to the radiative transfer equation, both in the time domain and the continuous wave domain. Validation of the theory by means of comparisons with Monte Carlo simulations is also presented. The results are of interest for many research fields such as biomedical optics, advanced microscopy and disordered photonics.

Journal ArticleDOI
TL;DR: In this paper, the Schrödinger bridge problem (SBP) is viewed as a stochastic regularization of OMT and can be cast as a control problem of steering the probability density of the state vector of a dynamical system between two marginals.
Abstract: Monge-Kantorovich optimal mass transport (OMT) provides a blueprint for geometries in the space of positive densities: it quantifies the cost of transporting a mass distribution into another. In particular, it provides natural options for interpolation of distributions (displacement interpolation) and for modeling flows. As such it has been the cornerstone of recent developments in physics, probability theory, image processing, time-series analysis, and several other fields. In spite of extensive work and theoretical developments, the computation of OMT for large-scale problems has remained a challenging task. An alternative framework for interpolating distributions, rooted in statistical mechanics and large deviations, is that of the Schrödinger bridge problem (SBP), which leads to entropic interpolation. SBP may be seen as a stochastic regularization of OMT, and can be cast as the stochastic control problem of steering the probability density of the state vector of a dynamical system between two marginals. The actual computation of entropic flows, however, has received hardly any attention. In our recent work on Schrödinger bridges for Markov chains and quantum channels, we showed that the solution can be efficiently obtained from the fixed point of a map which is contractive in the Hilbert metric. Thus, the purpose of this paper is to show that a similar approach can be taken in the context of diffusion processes which (i) leads to a new proof of a classical result on SBP and (ii) provides an efficient computational scheme for both SBP and OMT. We illustrate this new computational approach by obtaining interpolation of densities in representative examples such as interpolation of images.
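In the discrete (Markov-chain) setting the authors reference, the contractive fixed-point map reduces to Sinkhorn-like scaling iterations; the sketch below shows that discrete analogue for bridging two marginals through a Gaussian-kernel prior. The grid, kernel width, and marginal shapes are all toy choices, not the paper's diffusion-based scheme.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
eps = 0.05                                              # kernel width / entropic regularization
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / eps)       # Gaussian prior kernel

# Two marginal densities to be bridged (toy bimodal and unimodal shapes).
mu0 = np.exp(-((x - 0.25) / 0.05) ** 2) + np.exp(-((x - 0.6) / 0.05) ** 2)
mu1 = np.exp(-((x - 0.8) / 0.08) ** 2)
mu0 /= mu0.sum()
mu1 /= mu1.sum()

# Fixed-point (Sinkhorn-type) iteration on the scaling vectors phi and hat_phi.
phi = np.ones(n)
for _ in range(300):
    hat_phi = mu0 / (K @ phi)
    phi = mu1 / (K.T @ hat_phi)

coupling = hat_phi[:, None] * K * phi[None, :]
# Marginal errors should be near machine precision after convergence.
print(abs(coupling.sum(1) - mu0).max(), abs(coupling.sum(0) - mu1).max())
```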

Journal ArticleDOI
TL;DR: This letter presents an end-to-end performance analysis of dual-hop project-and-forward relaying in a realistic scenario, where the source-relay and relay-destination links experience MIMO-pinhole and Rayleigh channel conditions, respectively.
Abstract: In this letter, we present an end-to-end performance analysis of dual-hop project-and-forward relaying in a realistic scenario, where the source-relay and the relay-destination links are experiencing MIMO-pinhole and Rayleigh channel conditions, respectively. We derive the probability density function of both the relay postprocessing and the end-to-end signal-to-noise ratios, and the obtained expressions are used to derive the outage probability of the analyzed system as well as its end-to-end ergodic capacity in terms of generalized functions. Applying then the residue theory to Mellin–Barnes integrals, we infer the system asymptotic behavior for different channel parameters. As the bivariate Meijer-G function is involved in the analysis, we propose a new and fast MATLAB implementation enabling an automated definition of the complex integration contour. Extensive Monte-Carlo simulations are invoked to corroborate the analytical results.

Journal ArticleDOI
TL;DR: This paper discusses the logistic equation subject to uncertainties in two parameters, the environmental carrying capacity K and the initial population density N0, and provides closed-form results for the first probability density function of the population density over time.
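A quick Monte Carlo counterpart of the problem described above (propagating random K and N0 through the closed-form logistic solution and estimating the density of the population at a given time) might look like this; the growth rate and input distributions are invented for illustration, and the paper's closed-form PDF is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
r, t = 0.8, 3.0                                   # deterministic growth rate and time (assumed)
K = rng.normal(10.0, 1.0, size=100_000)           # random carrying capacity
N0 = rng.uniform(0.5, 2.0, size=100_000)          # random initial population density

# Closed-form logistic solution N(t) = K N0 / (N0 + (K - N0) exp(-r t)).
N_t = K * N0 / (N0 + (K - N0) * np.exp(-r * t))

# Kernel estimate of the probability density function of N(t).
pdf = gaussian_kde(N_t)
grid = np.linspace(N_t.min(), N_t.max(), 5)
print(grid, pdf(grid))
```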

Journal ArticleDOI
TL;DR: In this paper, a privacy-constrained information extraction problem is considered where for a pair of correlated discrete random variables governed by a given joint distribution, an agent observes Y and wants to convey to a potentially public user as much information about Y as possible while limiting the amount of information revealed about X.
Abstract: A privacy-constrained information extraction problem is considered where for a pair of correlated discrete random variables (X,Y) governed by a given joint distribution, an agent observes Y and wants to convey to a potentially public user as much information about Y as possible while limiting the amount of information revealed about X. To this end, the so-called rate-privacy function is investigated to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from Y under a privacy constraint between X and the extracted information, where privacy is measured using either mutual information or maximal correlation. Properties of the rate-privacy function are analyzed and its information-theoretic and estimation-theoretic interpretations are presented for both the mutual information and maximal correlation privacy measures. It is also shown that the rate-privacy function admits a closed-form expression for a large family of joint distributions of (X,Y). Finally, the rate-privacy function under the mutual information privacy measure is considered for the case where (X,Y) has a joint probability density function by studying the problem where the extracted information is a uniform quantization of Y corrupted by additive Gaussian noise. The asymptotic behavior of the rate-privacy function is studied as the quantization resolution grows without bound and it is observed that not all of the properties of the rate-privacy function carry over from the discrete to the continuous case.
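In symbols, the rate-privacy function described above (for the mutual-information privacy measure) can be written as follows; the notation g_ε is an assumption of this summary, not necessarily the paper's.

```latex
% Maximal information extractable from Y subject to a privacy budget \epsilon on X,
% where Z is the disclosed information and X - Y - Z form a Markov chain:
g_{\epsilon}(X;Y) \;=\; \sup_{P_{Z\mid Y}\,:\; I(X;Z)\,\le\,\epsilon} I(Y;Z).
```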

Journal ArticleDOI
TL;DR: The early-time regime of the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions in curved (or droplet) geometry is considered, and the rate function Φ_drop(H) is found to be surprisingly reminiscent of the large deviation function describing the stationary fluctuations of finite-size models belonging to the KPZ universality class.
Abstract: We consider the early time regime of the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions in curved (or droplet) geometry. We show that for short time t, the probability distribution P(H,t) of the height H at a given point x takes the scaling form P(H,t) ∼ exp[-Φ_drop(H)/√t], where the rate function Φ_drop(H) is computed exactly for all H. While it is Gaussian in the center, i.e., for small H, the probability distribution function has highly asymmetric non-Gaussian tails that we characterize in detail. This function Φ_drop(H) is surprisingly reminiscent of the large deviation function describing the stationary fluctuations of finite-size models belonging to the KPZ universality class. Thanks to a recently discovered connection between the KPZ equation and free fermions, our results have interesting implications for the fluctuations of the rightmost fermion in a harmonic trap at high temperature and the full counting statistics at the edge.

Journal ArticleDOI
TL;DR: In this paper, a new random vibration theory for coupled train-bridge systems is proposed, and the probability density evolution method (PDEM) is employed to calculate the random vibration of the three-dimensional (3D) train-bridge system with a program implemented on the MATLAB® software platform.

Journal ArticleDOI
TL;DR: In this article, a robust recursive algorithm for output error models with time-varying parameters is proposed, and the convergence of the proposed robust algorithm is analyzed using the methodology of an associated ordinary differential equation system.
Abstract: Intensive research in the field of mathematical modeling of pneumatic servo drives has shown that their mathematical models are nonlinear and that many important details cannot be included in the model. Owing to the combined influence of the heat coefficient, the unknown discharge coefficient, and temperature changes, the parameters of the pneumatic cylinder are assumed to be random (stochastic). On the other hand, it is well known that a nonlinear model can be approximated by a linear model with time-varying parameters. For these reasons, the pneumatic cylinder model can be assumed to be a linear stochastic model with variable parameters. In practice, measurements contain rare observations that are inconsistent with the largest part of the population of observations (outliers). Therefore, the synthesis of robust algorithms is of primary interest. In this paper, a robust recursive algorithm for output error models with time-varying parameters is proposed. The convergence of the proposed robust algorithm is analyzed using the methodology of an associated ordinary differential equation system. Because ad hoc selection of model orders leads to overparameterization or parsimony problems, a robust Akaike criterion is proposed to overcome them. Determining the least favorable probability density for a given class of probability distributions forms the basis for the design of the robust version of the Akaike criterion. The behavior of the proposed robust identification algorithm is examined through intensive simulations that demonstrate its superiority over linear algorithms (derived under the assumption that the stochastic disturbance has a Gaussian distribution). The practical value of the proposed robust algorithm for identification of a pneumatic cylinder is illustrated by experimental results.
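The flavor of a robust recursive estimator can be conveyed with a simplified sketch: recursive least squares in which the prediction error passes through a Huber-type influence function, so outliers get bounded influence. This is a generic illustration of the robustification idea, not the paper's output-error algorithm, and all tuning constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

def huber_psi(e, k=1.345):
    """Huber influence function: linear for small errors, clipped for outliers."""
    return np.clip(e, -k, k)

# Simulated data from a 2-parameter linear model with occasional outliers
# (parameters kept constant here; the forgetting factor would track slow variation).
n, theta = 500, np.array([1.0, -0.5])
theta_hat = np.zeros(2)
P = 1e3 * np.eye(2)                       # covariance of the parameter estimate
lam = 0.98                                # forgetting factor for time-varying parameters

for t in range(n):
    phi = rng.normal(size=2)              # regressor
    noise = rng.normal(scale=0.1)
    if rng.random() < 0.05:               # occasional gross outlier
        noise += rng.normal(scale=5.0)
    y = phi @ theta + noise

    e = y - phi @ theta_hat               # prediction error
    g = P @ phi / (lam + phi @ P @ phi)   # gain vector
    theta_hat = theta_hat + g * huber_psi(e)        # robustified update
    P = (P - np.outer(g, phi) @ P) / lam

print(theta_hat)   # should stay close to [1.0, -0.5] despite the outliers
```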

Journal ArticleDOI
TL;DR: In this article, an extreme learning machine (ELM) is employed to compute the shape (k) and scale (c) factors of Weibull distribution function, and the developed ELM model is trained and tested based upon two widely successful methods used to estimate k and c parameters.
Abstract: Knowledge of the probabilistic wind speed distribution is of particular significance for reliable evaluation of the wind energy potential and effective adoption of site-specific wind turbines. Among all proposed probability density functions, the two-parameter Weibull function has been extensively endorsed and utilized to model wind speeds and express the wind speed distribution at various locations. In this research work, an extreme learning machine (ELM) is employed to compute the shape (k) and scale (c) factors of the Weibull distribution function. The developed ELM model is trained and tested based upon two widely successful methods used to estimate the k and c parameters. The efficiency and accuracy of the ELM are compared against support vector machine, artificial neural network and genetic programming approaches for estimating the same Weibull parameters. The results reveal that the ELM approach attains higher precision in estimating both Weibull parameters than the other methods evaluated. The mean absolute percentage error, mean absolute bias error and root mean square error for k are 8.4600%, 0.1783 and 0.2371, while for c they are 0.2143%, 0.0118 and 0.0192 m/s, respectively. In conclusion, the application of ELM is found to be particularly promising as an alternative method to estimate the Weibull k and c factors.
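The ELM models themselves are not reproduced here, but the Weibull shape k and scale c that they target can be estimated from wind speed records by standard maximum likelihood in a few lines (with synthetic data standing in for station measurements):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(11)
# Synthetic hourly wind speeds, nominally k = 2.0 and c = 6.5 m/s.
wind = weibull_min.rvs(2.0, scale=6.5, size=8760, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero (conventional for wind speed).
k_hat, _, c_hat = weibull_min.fit(wind, floc=0)
print(f"k ~ {k_hat:.3f}, c ~ {c_hat:.3f} m/s")
```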

Journal ArticleDOI
TL;DR: This work demonstrates that internal PDFs are built during visual search and shows how they can be assessed with repetition and role-reversal effects, and indicates that observers learn properties of distractor distributions over and above mean and variance.

Journal ArticleDOI
TL;DR: It is found that roughness can significantly enhance the first escape of a particle from the middle well, especially for different skewness parameters, whereas only weak effects of the stability index and noise intensity are observed on the probability of a particle staying in the middle well and on the splitting probability to the right.
Abstract: Rough energy landscapes and noisy environments are two common features in many fields, such as protein folding. Owing to the widespread observation of bursting or spiking phenomena in the biological sciences, small diffusions mixed with large jumps are adopted to model the noisy environment, which can be properly described by Levy noise. We combine Levy noise with a rough energy landscape, modeled by a potential function superimposed with a fast oscillating function, and study the transport of a particle in a rough triple-well potential excited by Levy noise rather than only small perturbations. The probabilities of a particle staying in the middle well are considered under different amplitudes of roughness to find out how roughness affects the steady-state probability density function. Variations in the mean first passage time from the middle well to the right well are investigated with respect to the Levy parameters and the amplitude of the roughness. In addition, we examine the influence of roughness on the splitting probabilities of the first escape from the middle well. We find that roughness can significantly enhance the first escape of a particle from the middle well, especially for different skewness parameters, whereas only weak effects of the stability index and noise intensity are found on the probability of a particle staying in the middle well and on the splitting probability to the right.
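A bare-bones numerical counterpart of this setup (Euler-Maruyama integration of an overdamped particle in a triple-well potential with a fast-oscillating roughness term, driven by α-stable Lévy noise) is sketched below; the potential, roughness amplitude, and noise parameters are placeholder choices, not the paper's.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(12)
alpha, beta, sigma = 1.5, 0.0, 0.3        # stability index, skewness, noise intensity (assumed)
eps, k = 0.05, 60.0                       # roughness amplitude and wavenumber (assumed)

def dU(x):
    """Derivative of a smooth triple-well potential plus a fast oscillating roughness term."""
    smooth = x * (x**2 - 1.0) * (x**2 - 4.0)      # wells at x = 0 and x = +/-2, barriers at +/-1
    rough = -eps * k * np.sin(k * x)
    return smooth + rough

dt, n_paths, n_steps = 1e-3, 500, 5_000
x = np.zeros(n_paths)                              # all particles start in the middle well
alive = np.ones(n_paths, dtype=bool)
exit_time = np.full(n_paths, np.nan)

for step in range(1, n_steps + 1):
    xi = levy_stable.rvs(alpha, beta, size=alive.sum(), random_state=rng)
    x[alive] += -dU(x[alive]) * dt + sigma * dt ** (1.0 / alpha) * xi
    escaped = alive & (np.abs(x) > 1.5)            # safely past the barriers at |x| = 1
    exit_time[escaped] = step * dt
    alive &= ~escaped

print("escaped fraction:", np.mean(~alive))
print("mean first passage time (escaped paths):", np.nanmean(exit_time))
```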

Journal ArticleDOI
TL;DR: In this paper, a stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas, where the fluctuations are modeled by a superposition of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures.
Abstract: A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a super-position of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
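The stochastic model described here (a superposition of uncorrelated one-sided exponential pulses with exponentially distributed amplitudes, i.e. a filtered Poisson process) is easy to realize numerically; the sketch below generates one realization and histograms its amplitude PDF. The duty-cycle and amplitude parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(13)
T, dt, tau_d = 1_000.0, 0.01, 1.0         # record length, time resolution, pulse duration
gamma = 2.0                                # intermittency parameter = pulse rate * tau_d (assumed)

t = np.arange(0.0, T, dt)
signal = np.zeros_like(t)

n_pulses = rng.poisson(gamma * T / tau_d)
arrivals = rng.uniform(0.0, T, size=n_pulses)
amps = rng.exponential(1.0, size=n_pulses)          # exponentially distributed amplitudes

for t_k, a_k in zip(arrivals, amps):
    start = int(t_k / dt)
    stop = min(len(t), start + int(10 * tau_d / dt))    # truncate each pulse after 10 tau_d
    signal[start:stop] += a_k * np.exp(-(t[start:stop] - t_k) / tau_d)

# For exponential pulses with exponential amplitudes the stationary amplitude PDF is a
# Gamma distribution with shape parameter gamma; the histogram approximates it.
pdf, edges = np.histogram(signal, bins=100, density=True)
print("mean:", signal.mean(), "relative fluctuation:", signal.std() / signal.mean())
```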