
Showing papers on "Probability density function published in 2020"


Journal ArticleDOI
TL;DR: It is shown that RC–ESN substantially outperforms ANN and RNN–LSTM for short-term predictions, e.g., accurately forecasting the chaotic trajectories for hundreds of numerical solver's time steps equivalent to several Lyapunov timescales.
Abstract: In this paper, the performance of three machine-learning methods for predicting short-term evolution and for reproducing the long-term statistics of a multiscale spatiotemporal Lorenz 96 system is examined. The methods are an echo state network (ESN, a type of reservoir computing; hereafter RC–ESN), a deep feed-forward artificial neural network (ANN), and a recurrent neural network (RNN) with long short-term memory (LSTM; hereafter RNN–LSTM). This Lorenz 96 system has three tiers of nonlinearly interacting variables representing slow/large-scale (X), intermediate (Y), and fast/small-scale (Z) processes. For training or testing, only X is available; Y and Z are never known or used. We show that RC–ESN substantially outperforms ANN and RNN–LSTM for short-term predictions, e.g., accurately forecasting the chaotic trajectories for hundreds of the numerical solver's time steps, equivalent to several Lyapunov timescales. The RNN–LSTM outperforms ANN, and both methods show some prediction skill as well. Furthermore, even after losing the trajectory, data predicted by RC–ESN and RNN–LSTM have probability density functions (pdf's) that closely match the true pdf, even at the tails. The pdf of the data predicted using ANN, however, deviates from the true pdf. Implications, caveats, and applications to data-driven and data-assisted surrogate modeling of complex nonlinear dynamical systems, such as weather and climate, are discussed.
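As a rough illustration of the reservoir-computing approach, the following sketch trains a minimal echo state network on a toy scalar series and then runs it autonomously; all sizes and hyperparameters are illustrative choices, not those used in the paper.

```python
# Minimal echo-state-network (RC-ESN) sketch: drive a fixed random reservoir
# with a training series, ridge-regress a linear readout, then forecast
# autonomously by feeding predictions back as inputs.
import numpy as np

rng = np.random.default_rng(0)
N, rho, alpha = 200, 0.9, 1e-6          # reservoir size, spectral radius, ridge

# Toy training series (a noisy sine stands in for the Lorenz 96 X variables).
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t) + 0.01 * rng.standard_normal(t.size)

W_in = rng.uniform(-0.5, 0.5, N)        # input weights
W = rng.standard_normal((N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius

# Collect reservoir states under teacher forcing.
states = np.zeros((u.size - 1, N))
r = np.zeros(N)
for k in range(u.size - 1):
    r = np.tanh(W @ r + W_in * u[k])
    states[k] = r

# Ridge regression for the readout mapping reservoir state -> next value.
Y = u[1:]
W_out = np.linalg.solve(states.T @ states + alpha * np.eye(N), states.T @ Y)

# Autonomous forecast: the model's own output becomes the next input.
preds, x = [], u[-1]
for _ in range(100):
    r = np.tanh(W @ r + W_in * x)
    x = r @ W_out
    preds.append(x)
preds = np.array(preds)
print(preds[:3])
```

The same loop structure scales to vector-valued series by making `W_in` a matrix and `W_out` a readout matrix.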

142 citations


Posted Content
TL;DR: This work provides a theoretical sufficient criterion showing that the distribution generated by equivariant normalizing flows is invariant with respect to these symmetries by design, and proposes building blocks for flows that preserve the symmetries usually found in physical/chemical many-body particle systems.
Abstract: Normalizing flows are exact-likelihood generative neural networks which approximately transform samples from a simple prior distribution to samples of the probability distribution of interest. Recent work showed that such generative models can be utilized in statistical mechanics to sample equilibrium states of many-body systems in physics and chemistry. To scale and generalize these results, it is essential that the natural symmetries in the probability density -- in physics defined by the invariances of the target potential -- are built into the flow. We provide a theoretical sufficient criterion showing that the distribution generated by equivariant normalizing flows is invariant with respect to these symmetries by design. Furthermore, we propose building blocks for flows that preserve the symmetries usually found in physical/chemical many-body particle systems. Using benchmark systems motivated by molecular physics, we demonstrate that these symmetry-preserving flows can provide better generalization capabilities and sampling efficiency.

90 citations


Journal ArticleDOI
TL;DR: The results demonstrate that the GPDEM shows promise as an approach that can reliably analyze strongly nonlinear structures, such as earth-rockfill dams and other geotechnical engineering structures.

85 citations


Journal ArticleDOI
TL;DR: The effectiveness of the proposed deep mixture model in characterizing predicted PDFs is demonstrated through comparison with kernel density estimation, Monte Carlo dropout, a combined probabilistic load forecasting method and the proposed MDN without adversarial training.
Abstract: This paper proposes a direct model for conditional probability density forecasting of residential loads, based on a deep mixture network. Probabilistic residential load forecasting can provide comprehensive information about future uncertainties in demand. An end-to-end composite model comprising convolutional neural networks (CNNs) and a gated recurrent unit (GRU) is designed for probabilistic residential load forecasting. Then, the designed deep model is merged into a mixture density network (MDN) to directly predict probability density functions (PDFs). In addition, several techniques, including adversarial training, are presented to formulate a new loss function for the direct probabilistic residential load forecasting (PRLF) model. Several state-of-the-art deep and shallow forecasting models are also presented in order to compare the results. Furthermore, the effectiveness of the proposed deep mixture model in characterizing predicted PDFs is demonstrated through comparison with kernel density estimation, Monte Carlo dropout, a combined probabilistic load forecasting method, and the proposed MDN without adversarial training.
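The MDN head described above can be sketched as follows: the backbone's raw outputs parameterize a Gaussian mixture, and that mixture is the predicted conditional PDF. The raw head outputs below are invented placeholders, not values from the paper.

```python
# Sketch of a mixture-density-network (MDN) output head: softmax the logits
# into mixture weights, exponentiate log-scales, and evaluate the resulting
# Gaussian-mixture density p(y | x).
import numpy as np

def mdn_pdf(y, logits, means, log_sigmas):
    """Evaluate the MDN's predicted density at y from raw head outputs."""
    w = np.exp(logits - logits.max())
    w /= w.sum()                          # softmax -> mixture weights
    sig = np.exp(log_sigmas)              # positive scales
    comps = np.exp(-0.5 * ((y - means) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return float(w @ comps)

# Pretend the CNN-GRU backbone produced these raw outputs for one household:
logits = np.array([0.2, -0.1, 0.5])
means = np.array([1.0, 2.5, 4.0])         # kW (hypothetical)
log_sigmas = np.array([-0.5, 0.0, -0.2])

# Sanity check: the predicted PDF integrates to ~1 over a wide grid.
grid = np.linspace(-5, 10, 2001)
pdf = np.array([mdn_pdf(y, logits, means, log_sigmas) for y in grid])
mass = pdf.sum() * (grid[1] - grid[0])
print(round(mass, 4))
```

In training, a loss such as the negative log-likelihood of this density (plus the paper's adversarial term) would be minimized with respect to the backbone weights.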

79 citations


Journal ArticleDOI
TL;DR: The Lyapunov–Krasovskii functional is constructed with the distributed kernel to make full use of the delay probability distribution, and sufficient conditions for ensuring the stability of the closed-loop system with prescribed $H_{\infty}$ performance are formulated as linear matrix inequalities.
Abstract: This article addresses the design of an event-triggered $H_{\infty }$ controller for networked control systems with network channel delay. First, the network channel delay is modeled as a distributed delay with a probability density function as its kernel. Then, the closed-loop event-triggered control system is established as a distributed delay system. To make full use of the delay probability distribution, the Lyapunov–Krasovskii functional is constructed with the distributed kernel. By applying the Lyapunov method, sufficient conditions for ensuring the stability of the closed-loop system with prescribed $H_{\infty }$ performance are formulated as linear matrix inequalities. A numerical example shows that the proposed method is less conservative than some existing results considering delay distribution.

73 citations


Journal ArticleDOI
TL;DR: Experimental results show that the values of both CDF and PDF can be precisely estimated by the proposed deep learning method.
Abstract: In order to generate a probability density function (PDF) for fitting the probability distributions of practical data, this study proposes a deep learning method which consists of two stages: (1) a training stage for estimating the cumulative distribution function (CDF) and (2) a performing stage for predicting the corresponding PDF. The CDFs of common probability distributions can be utilised as activation functions in the hidden layers of the proposed deep learning model for learning actual cumulative probabilities, and the derivative of the trained deep learning model can be used to estimate the PDF. Numerical experiments with single and mixed distributions are conducted to evaluate the performance of the proposed method. The experimental results show that the values of both the CDF and PDF can be precisely estimated by the proposed method.
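The two-stage idea (fit a smooth model to the CDF, then differentiate it to get the PDF) can be sketched without a neural network; here a moment-matched logistic CDF stands in for the trained model.

```python
# Stage 1: fit a smooth, differentiable surrogate to the data's CDF
# (a moment-matched logistic CDF here, instead of a trained network).
# Stage 2: differentiate the fitted CDF to recover the PDF.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=50_000)

mu = data.mean()
s = data.std() * np.sqrt(3) / np.pi      # logistic scale matched to the variance

def cdf(x):
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

def pdf(x):
    # Closed-form derivative of the logistic CDF; a trained network would be
    # differentiated by autodiff or finite differences instead.
    c = cdf(x)
    return c * (1.0 - c) / s

grid = np.linspace(-2, 6, 801)
mass = pdf(grid).sum() * (grid[1] - grid[0])
print(round(mass, 3))                    # probability mass captured on the grid
```

The key property being exploited is simply that the PDF is the derivative of the CDF, so any smooth monotone CDF fit yields a PDF estimate for free.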

73 citations


Journal ArticleDOI
25 May 2020
TL;DR: This work investigates the probability distribution function of the channel fading between a base station, an array of intelligent reflecting elements known as large intelligent surfaces (LIS), and a single-antenna user, and presents an accurate approximation and upper bounds for the bit error rate.
Abstract: In this work, we investigate the probability distribution function of the channel fading between a base station, an array of intelligent reflecting elements, known as large intelligent surfaces (LIS), and a single-antenna user. We assume that both fading channels, i.e., the channel between the base station and the LIS and the channel between the LIS and the single user, are Nakagami-$m$ distributed. Additionally, we derive the exact bit error probability considering quadrature amplitude modulation ($M$-QAM) and binary phase-shift keying (BPSK) when the number of LIS elements, $n$, is equal to 2 and 3. We assume that the LIS can perform phase adjustment, but there is a residual phase error modeled by a Von Mises distribution. Based on the central limit theorem, and considering a large number of reflecting elements, we also present an accurate approximation and upper bounds for the bit error rate. Through several Monte Carlo simulations, we demonstrate that all derived expressions perfectly match the simulated results.
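A Monte Carlo sketch of this setup: each LIS element contributes the product of two Nakagami-$m$ fades with a Von Mises residual phase error, and the BPSK bit error rate is averaged over channel realizations. All parameter values are illustrative, not the paper's.

```python
# Monte Carlo BER for BPSK through an LIS: per-element channel is the product
# of two Nakagami-m amplitudes (sqrt of a Gamma variate) times a residual
# Von Mises phase error; conditional BER is Q(sqrt(2*snr)*|h|) = erfc(sqrt(snr)*|h|)/2.
import math
import numpy as np

rng = np.random.default_rng(2)
m, omega = 2.0, 1.0        # Nakagami shape / spread for both hops
kappa = 10.0               # Von Mises concentration of the phase error
n_elems, trials = 3, 100_000
snr = 10.0 ** (5 / 10)     # 5 dB average SNR (illustrative)

def nakagami(size):
    return np.sqrt(rng.gamma(m, omega / m, size))

a = nakagami((trials, n_elems))                      # BS -> LIS fades
b = nakagami((trials, n_elems))                      # LIS -> user fades
theta = rng.vonmises(0.0, kappa, (trials, n_elems))  # residual phase errors
h = np.abs(np.sum(a * b * np.exp(1j * theta), axis=1))

q = 0.5 * np.array([math.erfc(x) for x in np.sqrt(snr) * h])
ber = q.mean()
print(f"{ber:.2e}")
```

With perfect phase alignment (`kappa -> inf`) the element products add coherently; lowering `kappa` degrades the effective channel gain and raises the BER.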

68 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyzed the characteristics of wind speed data at the Al-Salman site, Iraq, using the Weibull distribution; the maximum likelihood method (MLM) was used to estimate the two essential wind speed parameters.

65 citations


Journal ArticleDOI
01 Sep 2020
TL;DR: In this article, a two-parameter extension of generalized Riemann-Liouville fractional integral inequalities is presented, and the expectation and variance of a continuous random variable are estimated by employing generalized Riemann-Liouville fractional integral operators.
Abstract: In statistical analysis, a probability density function is often used to describe the relationship between certain unknown parameters and the measurements taken to learn about them. As soon as more than enough data are collected to determine a unique solution for the parameters, an estimation technique needs to be applied, such as fractional calculus, which turns out to be optimal under a wide range of criteria. In this context, we aim to present some novel estimates based on the expectation and variance of a continuous random variable by employing generalized Riemann-Liouville fractional integral operators. Besides, we obtain a two-parameter extension of generalized Riemann-Liouville fractional integral inequalities, and provide several modifications in the Riemann-Liouville and classical sense. Our ideas and obtained results may stimulate further research in statistical analysis.
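For orientation, the left-sided Riemann–Liouville fractional integral underlying these estimates, together with fractional expectation and variance as commonly defined in this literature (paraphrased from the standard conventions, not quoted from the paper):

```latex
% Left-sided Riemann--Liouville fractional integral of order \alpha > 0:
\left(J_{a^{+}}^{\alpha} f\right)(x)
  = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} (x - t)^{\alpha - 1} f(t)\, dt .
% For a random variable X with density f on [a, b], the fractional
% expectation and variance of order \alpha are then typically taken as
E_{\alpha}(X) = \frac{1}{\Gamma(\alpha)} \int_{a}^{b} t\,(b - t)^{\alpha - 1} f(t)\, dt ,
\qquad
\sigma_{\alpha}^{2}(X) = \frac{1}{\Gamma(\alpha)}
  \int_{a}^{b} \bigl(t - E_{\alpha}(X)\bigr)^{2} (b - t)^{\alpha - 1} f(t)\, dt ,
% both of which reduce to the classical mean and variance at \alpha = 1.
```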

64 citations


Journal ArticleDOI
TL;DR: In this article, a new day-ahead (24h) short-term load probability density forecasting hybrid method with a decomposition-based quantile regression forest is proposed, which can more accurately reflect the uncertainty of power grid load.

61 citations


Journal ArticleDOI
TL;DR: In this article, a non-intrusive Chebyshev metamodel (CMM) is implemented on top of deterministic analysis using the discrete singular convolution (DSC) method with excellent computational efficiency and accuracy, and is used to obtain both deterministic and probabilistic results, including probability density functions, cumulative distribution functions (CDFs), means, and standard deviations of the critical buckling load.

Journal ArticleDOI
TL;DR: In this paper, it was shown that large deviations in the number of steps of a spreading random walker can lead to exponential decay of the walkers' positional density function, with logarithmic corrections.
Abstract: Brownian motion is a Gaussian process described by the central limit theorem. However, exponential decays of the positional probability density function $P(X,t)$ of packets of spreading random walkers, were observed in numerous situations that include glasses, live cells, and bacteria suspensions. We show that such exponential behavior is generally valid in a large class of problems of transport in random media. By extending the large deviations approach for a continuous time random walk, we uncover a general universal behavior for the decay of the density. It is found that fluctuations in the number of steps of the random walker, performed at finite time, lead to exponential decay (with logarithmic corrections) of $P(X,t)$. This universal behavior also holds for short times, a fact that makes experimental observations readily achievable.
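The mechanism can be caricatured in a few lines: letting the number of steps taken by "time t" fluctuate (Poisson here, as a crude stand-in for a renewal process) already pushes the packet's kurtosis above the Gaussian value of 3, the signature of the heavier, exponential-like tails discussed above.

```python
# Toy illustration of step-number fluctuations fattening the tails of P(X, t):
# conditional on N steps, X is Gaussian with variance N; mixing over a
# fluctuating N produces a leptokurtic (heavier-than-Gaussian-tailed) packet.
import numpy as np

rng = np.random.default_rng(3)
lam, samples = 5.0, 400_000

n_steps = rng.poisson(lam, samples)                   # fluctuating step count
x = rng.standard_normal(samples) * np.sqrt(n_steps)   # position after n steps

# A Gaussian has kurtosis exactly 3; for this toy model the theoretical
# value is 3 + 3/lam = 3.6.
kurt = np.mean(x ** 4) / np.mean(x ** 2) ** 2
print(round(kurt, 2))
```

The paper's large-deviations analysis makes this quantitative for continuous-time random walks, where the rare-step-count fluctuations produce genuinely exponential tails rather than just excess kurtosis.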

Journal ArticleDOI
TL;DR: A new computational method to evaluate comprehensively the positional accuracy reliability for single coordinate, single point, multipoint and trajectory accuracy of industrial robots is proposed using the sparse grid numerical integration method and the saddlepoint approximation method.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the local asymptotic stability of the endemic and disease-free equilibria of the deterministic model and showed that there is a critical value $R_0^s$ which can determine the extinction and the persistence in the mean of the disease.
Abstract: In this paper, we study the dynamical behaviors of an SVI epidemic model with a half-saturated incidence rate. Firstly, the local asymptotic stability of the endemic and disease-free equilibria of the deterministic model is studied. Then, for the stochastic model, we show that there is a critical value $R_0^s$ which can determine the extinction and the persistence in the mean of the disease. Furthermore, by constructing a series of suitable Lyapunov functions, we prove that if $R_0^s > 1$, then there exists an ergodic stationary distribution to the stochastic SVI model. It is worth mentioning that we obtain an exact expression of the probability density function of the stochastic model around the unique endemic equilibrium of the deterministic system by solving the corresponding Fokker-Planck equation, which is guaranteed by a new critical value $\hat{R}_0^s$. Finally, some numerical simulations illustrate the analytical results.

Journal ArticleDOI
TL;DR: This paper proposes a novel and easily implemented approach to combine density probabilistic load forecasts to further improve the performance of the final probabilistic forecasts.
Abstract: Researchers have proposed various probabilistic load forecasting models in the form of quantiles, densities, or intervals to describe the uncertainties of future energy demand. Density forecasts can provide more uncertainty information than can be expressed by just the quantile and interval. However, the combining method for density forecasts is seldom investigated. This paper proposes a novel and easily implemented approach to combine density probabilistic load forecasts to further improve the performance of the final probabilistic forecasts. The combination problem is formulated as an optimization problem to minimize the continuous ranked probability score of the combined model by searching the weights of different individual methods. Under the Gaussian mixture distribution assumption of the density forecasts, the problem is cast to a linearly constrained quadratic programming problem and can be solved efficiently. Case studies on the electric load datasets of eight areas verify the effectiveness of the proposed method.
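A toy version of the combination step: estimate the CRPS of a two-model Gaussian mixture by Monte Carlo and grid-search the weight. The two component forecasts and the data are invented, and the paper instead solves the Gaussian-mixture case exactly as a quadratic program.

```python
# Combine two Gaussian density forecasts into a mixture by searching the
# weight that minimizes average CRPS on held-out data, using the sample-based
# estimator CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|.
import numpy as np

rng = np.random.default_rng(4)
y_obs = rng.normal(10.0, 2.0, 300)        # held-out observations (toy data)

# Two hypothetical individual forecasters (biased high / biased low):
forecasts = [(11.0, 2.0), (8.5, 2.5)]     # (mean, std) per model

def crps_mixture(w, y, n=4000):
    """Monte Carlo CRPS of the w/(1-w) Gaussian mixture, averaged over y."""
    def draw():
        comp = rng.random(n) < w
        mu = np.where(comp, forecasts[0][0], forecasts[1][0])
        sd = np.where(comp, forecasts[0][1], forecasts[1][1])
        return rng.normal(mu, sd)
    xs, xs2 = draw(), draw()
    return np.abs(xs[None, :] - y[:, None]).mean() - 0.5 * np.abs(xs - xs2).mean()

weights = np.linspace(0.0, 1.0, 21)
scores = [crps_mixture(w, y_obs) for w in weights]
best_w = float(weights[int(np.argmin(scores))])
print(best_w)
```

As in the paper's case studies, the combined mixture scores better than either individual forecaster: the CRPS minimum lies strictly inside (0, 1).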

Journal ArticleDOI
TL;DR: In this article, a multiscale unified gas-kinetic wave-particle (UGKWP) method is proposed for the simulation of hypersonic flow in all regimes.

Journal ArticleDOI
TL;DR: The influences of random fluctuations on a two-degrees-of-freedom (TDOF) airfoil model with viscoelastic terms are explored analytically, indicating that external random fluctuations have a remarkable influence on the dynamics of the TDOF airfoil model with viscoelastic material properties.

Journal ArticleDOI
TL;DR: The purpose of this study is to consider the uncertainties in a short-term optimal operation model and to obtain, for each state variable, the probability density function of the next day's operation process; the results show that the proposed model and method are practical and effective.

Journal ArticleDOI
TL;DR: A novel generalized density/distribution fitting method (GDFM), combined with the Copula function, is used to establish a joint probability model for wind power generation, and a fast cumulant method (FCM) is proposed to reduce the computational burden of output cumulant calculation while retaining high accuracy in a nonlinear context.
Abstract: Currently, increasing wind power penetration, with its consequent randomness and variability, presents great challenges to power system planning and operation. Probabilistic power flow (PPF) has been developed to calculate the power flow under uncertain circumstances. However, current wind power models are subject to specific probability distributions, limiting their accuracy in wider applications. Additionally, the cumulant method (CM)-based PPF, if nonlinear relationships are considered, would face impractically high computational complexity. To address these problems in modeling and cumulant calculation, this article proposes a novel generalized density/distribution fitting method (GDFM), combined with the Copula function, to establish a joint probability model for wind power generation. A special impulse-mixed probability density (IMPD) integration method is also introduced to derive the input cumulants from the model. Finally, a fast cumulant method (FCM) is proposed to reduce the computational burden of output cumulant calculation while retaining high accuracy in a nonlinear context. A case study on the IEEE-118 test system validates the effectiveness of the proposed methods, and a real application to a provincial power grid in China provides useful power flow risk information for decision making. The whole FCM-based PPF scheme can be helpful for future power flow examination in power system planning and operation.

Journal ArticleDOI
28 Apr 2020
TL;DR: In this paper, it is shown that, given sufficiently many components, finite mixture models can approximate any other probability density function (pdf) to an arbitrary degree of accuracy.
Abstract: Given sufficiently many components, it is often cited that finite mixture models can approximate any other probability density function (pdf) to an arbitrary degree of accuracy. Unfortunately, the ...
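The approximation property is easy to demonstrate empirically: a few EM iterations fit a small Gaussian mixture to samples from a non-Gaussian target (a Laplace law here, chosen arbitrarily for illustration).

```python
# Fit a 3-component Gaussian mixture to Laplace-distributed samples with a
# hand-rolled EM loop, then check that the fitted density is a proper pdf.
import numpy as np

rng = np.random.default_rng(5)
data = rng.laplace(0.0, 1.0, 20_000)

K = 3
w = np.full(K, 1.0 / K)
mu = np.array([-1.0, 0.0, 1.0])
sig = np.ones(K)

def comp_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

for _ in range(50):                       # EM iterations
    resp = w * comp_pdf(data, mu, sig)    # E-step: responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)                 # M-step: re-estimate parameters
    w = nk / data.size
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sig = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

grid = np.linspace(-8, 8, 1601)
fitted = (w * comp_pdf(grid, mu, sig)).sum(axis=1)
mass = fitted.sum() * (grid[1] - grid[0])
print(round(mass, 3))
```

More components sharpen the fit to the Laplace cusp and tails, which is exactly the approximation statement the paper scrutinizes.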

Journal ArticleDOI
TL;DR: A novel day-ahead load probability density forecasting method by transforming and combining multiple quantile forecasts that is robust to kernel function selection in the transformation step and has better forecasting performance.

Journal ArticleDOI
TL;DR: A two-period newsvendor problem is considered in which the demands of different periods are interdependent (not independent); a model that takes interdependent demand into account ought to provide a better solution than a model based on independent demand.
Abstract: The newsvendor problem is a classical task in inventory management. The present paper considers a two-period newsvendor problem where the demands of different periods are interdependent (not independent), and follows this approach to develop a two-period newsvendor problem with unsatisfied demand or unsold quantity. Owing to the complexity of solving multiple integrals, the problem is assessed for only two periods. In the course of a numerical solution, the probability distribution function of demand pertaining to each period is assumed to be given (in the form of a bivariate normal distribution). The optimal solution is presented in the form of the initial inventory level that maximizes the expected profit. Finally, all model parameters are subjected to a sensitivity analysis. This model can be used in a number of applications, such as procurement of raw materials in projects (e.g., construction, bridge-building, and molding) where the demands of different periods are interdependent. The proposed model, which takes interdependent demand into account, ought to provide a better solution than a model based on independent demand.
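A Monte Carlo sketch of the model's flavor: correlated bivariate-normal demands, a single initial stock serving both periods in sequence, and a grid search for the profit-maximizing initial inventory. Prices, costs, and the demand law are invented, and the carryover rule is a simplification of the paper's model.

```python
# Two-period newsvendor toy: demands (D1, D2) are correlated bivariate normal;
# an initial stock Q sells in period 1, leftovers sell in period 2, and unmet
# demand is lost. Grid-search Q to maximize Monte Carlo expected profit.
import numpy as np

rng = np.random.default_rng(6)
price, cost = 10.0, 6.0
mean = [100.0, 100.0]
cov = [[400.0, 240.0], [240.0, 400.0]]    # correlation 0.6 between periods
d = np.clip(rng.multivariate_normal(mean, cov, 100_000), 0, None)

def expected_profit(Q):
    sold1 = np.minimum(d[:, 0], Q)
    sold2 = np.minimum(d[:, 1], Q - sold1)   # leftovers serve period 2
    return (price * (sold1 + sold2) - cost * Q).mean()

grid = np.arange(120, 320, 5)
profits = np.array([expected_profit(Q) for Q in grid])
best_Q = int(grid[int(np.argmax(profits))])
print(best_Q)
```

Raising the demand correlation fattens the distribution of total demand, which shifts the optimal initial inventory relative to the independent-demand solution; this is the effect the paper's sensitivity analysis quantifies.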

Journal ArticleDOI
TL;DR: In this paper, the nonlinear behaviors and vibration reduction of a linear system with a nonlinear energy sink (NES) are investigated, where the linear system, consisting of a mass block, a linear spring, and a linear viscous damper, is excited by harmonic and random base excitation.
Abstract: The nonlinear behaviors and vibration reduction of a linear system with a nonlinear energy sink (NES) are investigated. The linear system, consisting of a mass block, a linear spring, and a linear viscous damper, is excited by a harmonic and random base excitation. The NES is composed of a mass block, a linear viscous damper, and a spring with ideal cubic nonlinear stiffness. Based on the generalized harmonic function method, the steady-state Fokker-Planck-Kolmogorov equation is presented to reveal the response of the system. The path integral method based on the Gauss-Legendre polynomial is used to obtain the numerical solutions. The performance of vibration reduction is evaluated by the displacement and velocity transition probability densities, the transmissibility transition probability density, and the percentage of the energy absorption transition probability density of the linear oscillator. The sensitivity of the parameters is analyzed by varying the nonlinear stiffness coefficient and the damping ratio. The investigation illustrates that a linear system with an NES can also achieve substantial vibration reduction under harmonic and random base excitations, and that random bifurcation may appear under different parameters, which will affect the stability of the system.

Journal ArticleDOI
TL;DR: A unified framework for the performance analysis of dual-hop underwater wireless optical communication (UWOC) systems with amplify-and-forward fixed-gain relays in the presence of air bubbles and temperature gradients is presented, and it is demonstrated that the dual-hop UWOC system can effectively mitigate the short range as well as both temperature-gradient- and air-bubble-induced turbulence.
Abstract: In this work, we present a unified framework for the performance analysis of dual-hop underwater wireless optical communication (UWOC) systems with amplify-and-forward fixed gain relays in the presence of air bubbles and temperature gradients. Operating under either heterodyne detection or intensity modulation with direct detection, the UWOC is modeled by the unified mixture Exponential-Generalized Gamma distribution that we have proposed based on an experiment conducted in an indoor laboratory setup and has been shown to provide an excellent fit with the measured data under the considered lab channel scenarios. More specifically, we derive the cumulative distribution function (CDF) and the probability density function of the end-to-end signal-to-noise ratio (SNR) in exact closed-form in terms of the bivariate Fox’s H function. Based on this CDF expression, we present novel results for the fundamental performance metrics such as the outage probability, the average bit-error rate (BER) for various modulation schemes, and the ergodic capacity. Additionally, very tight asymptotic results for the outage probability and the average BER at high SNR are obtained in terms of simple functions. Furthermore, we demonstrate that the dual-hop UWOC system can effectively mitigate the short range and both temperature gradients and air bubbles induced turbulences, as compared to the single UWOC link. All the results are verified via computer-based Monte-Carlo simulations.

Journal ArticleDOI
TL;DR: In this article, a site-specific multivariate probability density function (PDF) of soil parameters based on limited and incomplete site-specific investigation data is presented. But the method is not suitable for the analysis of large-scale sites.
Abstract: It is important to be able to construct a site-specific multivariate probability density function (PDF) of soil parameters based on limited and incomplete site-specific investigation data a...

Journal ArticleDOI
TL;DR: In this article, a fatigue probability calculation method based on the probability density evolution method is proposed to calculate the fatigue reliability of the tower flange and bolts of a 1.5 MW wind turbine tower.

Journal ArticleDOI
TL;DR: In this article, a tensor train approximation to the target probability density function using a small number of function evaluations is proposed, which is based on low-rank surrogates in the tensor-train format, a methodology that has been exploited for many years for scalable, high-dimensional density function approximation in quantum physics and chemistry.
Abstract: General multivariate distributions are notoriously expensive to sample from, particularly the high-dimensional posterior distributions in PDE-constrained inverse problems. This paper develops a sampler for arbitrary continuous multivariate distributions that is based on low-rank surrogates in the tensor train format, a methodology that has been exploited for many years for scalable, high-dimensional density function approximation in quantum physics and chemistry. We build upon recent developments of the cross approximation algorithms in linear algebra to construct a tensor train approximation to the target probability density function using a small number of function evaluations. For sufficiently smooth distributions, the storage required for accurate tensor train approximations is moderate, scaling linearly with dimension. In turn, the structure of the tensor train surrogate allows sampling by an efficient conditional distribution method since marginal distributions are computable with linear complexity in dimension. Expected values of non-smooth quantities of interest, with respect to the surrogate distribution, can be estimated using transformed independent uniformly-random seeds that provide Monte Carlo quadrature or transformed points from a quasi-Monte Carlo lattice to give more efficient quasi-Monte Carlo quadrature. Unbiased estimates may be calculated by correcting the transformed random seeds using a Metropolis–Hastings accept/reject step, while the quasi-Monte Carlo quadrature may be corrected either by a control-variate strategy or by importance weighting. 
We show that the error in the tensor train approximation propagates linearly into the Metropolis–Hastings rejection rate and the integrated autocorrelation time of the resulting Markov chain; thus, the integrated autocorrelation time may be made arbitrarily close to 1, implying that, asymptotic in sample size, the cost per effectively independent sample is one target density evaluation plus the cheap tensor train surrogate proposal that has linear cost with dimension. These methods are demonstrated in three computed examples: fitting failure time of shock absorbers; a PDE-constrained inverse diffusion problem; and sampling from the Rosenbrock distribution. The delayed rejection adaptive Metropolis (DRAM) algorithm is used as a benchmark. In all computed examples, the importance weight-corrected quasi-Monte Carlo quadrature performs best and is more efficient than DRAM by orders of magnitude across a wide range of approximation accuracies and sample sizes. Indeed, all the methods developed here significantly outperform DRAM in all computed examples.
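The conditional-distribution sampling step can be sketched in two dimensions, with a discretized joint density standing in for the tensor-train surrogate: draw x from the marginal, then y from the conditional p(y | x).

```python
# Conditional-distribution sampling from a gridded joint density p(x, y):
# marginalize over y to draw x, then draw y from the conditional slice.
# A correlated Gaussian (rho = 0.6) plays the role of the surrogate density.
import numpy as np

rng = np.random.default_rng(7)
xs = np.linspace(-3, 3, 200)
ys = np.linspace(-3, 3, 200)

X, Y = np.meshgrid(xs, ys, indexing="ij")
p = np.exp(-0.5 * (X ** 2 - 1.2 * X * Y + Y ** 2) / (1 - 0.36))
p /= p.sum()                                   # discrete joint pmf on the grid

def sample(n):
    marg_x = p.sum(axis=1)                     # marginal over y
    ix = rng.choice(xs.size, size=n, p=marg_x)
    out = np.empty((n, 2))
    for k, i in enumerate(ix):
        cond = p[i] / p[i].sum()               # conditional p(y | x_i)
        out[k] = xs[i], ys[rng.choice(ys.size, p=cond)]
    return out

draws = sample(5000)
corr = float(np.corrcoef(draws.T)[0, 1])
print(round(corr, 2))                          # should land near rho = 0.6
```

In the tensor-train setting the same marginal-then-conditional recursion runs over all dimensions, and the low-rank format is what keeps each marginalization linear in dimension.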

Journal ArticleDOI
TL;DR: In this article, experimental particle size data for coal samples under wet and dry grinding are characterized by commonly used distribution functions, and a time-dependent expression is derived to describe the cumulative particle size distribution.

Journal ArticleDOI
TL;DR: In this paper, a soft piecewise version of batch normalization, called mixture normalization (MN), was proposed to accelerate the training of deep Convolutional Neural Networks (CNNs).
Abstract: Batch Normalization (BN) is essential to effectively train state-of-the-art deep Convolutional Neural Networks (CNNs). It normalizes the layer outputs during training using the statistics of each mini-batch. BN accelerates the training procedure by allowing the safe use of large learning rates, and it alleviates the need for careful initialization of the parameters. In this work, we study BN from the viewpoint of Fisher kernels that arise from generative probability models. We show that, assuming the samples within a mini-batch are drawn from the same probability density function, BN is identical to the Fisher vector of a Gaussian distribution. This means the batch normalizing transform can be explained in terms of kernels that naturally emerge from the probability density function that models the generative process of the underlying data distribution. Consequently, it promises higher discrimination power for the batch-normalized mini-batch. However, given the rectifying non-linearities employed in CNN architectures, the distribution of the layer outputs shows an asymmetric characteristic. Therefore, in order for BN to fully benefit from the aforementioned properties, we propose approximating the underlying data distribution not with one Gaussian density but with a mixture of Gaussian densities. Deriving the Fisher vector for a Gaussian Mixture Model (GMM) reveals that batch normalization can be improved by independently normalizing with respect to the statistics of disentangled sub-populations. We refer to our proposed soft piecewise version of batch normalization as Mixture Normalization (MN). Through an extensive set of experiments on CIFAR-10 and CIFAR-100, using both a 5-layer deep CNN and the modern Inception-V3 architecture, we show that mixture normalization reduces the required number of gradient updates to reach the maximum test accuracy of the batch-normalized model by approximately 31%–47% across a variety of training scenarios.
Replacing even a few BN modules with MN in the 48-layer deep Inception-V3 architecture is sufficient to obtain not only considerable training acceleration but also better final test accuracy. We show that similar observations hold for 40- and 100-layer deep DenseNet architectures as well. We complement our study by evaluating the application of mixture normalization to Generative Adversarial Networks (GANs), where "mode collapse" hinders the training process. We solely replace a few batch normalization layers in the generator with our proposed mixture normalization. Our experiments using a Deep Convolutional GAN (DCGAN) on CIFAR-10 show that the mixture-normalized DCGAN not only provides an acceleration of approximately 58% but also reaches a lower (better) Fréchet Inception Distance (FID) of 33.35, compared to 37.56 for its batch-normalized counterpart.
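The core normalization step can be sketched in a few lines: softly assign each activation to a mixture component, then standardize it with that component's responsibility-weighted mini-batch statistics. The two-component GMM here is fixed by hand for illustration; in the paper it is estimated from the data.

```python
# Mixture-normalization sketch: instead of one batch-wide standardization,
# each activation is normalized by the statistics of the GMM component it
# softly belongs to, then recombined with its responsibilities.
import numpy as np

rng = np.random.default_rng(8)
acts = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 1.0, 500)])

# Fixed, hypothetical two-component GMM (weights, means, stds):
w = np.array([0.5, 0.5]); mu = np.array([-2.0, 3.0]); sig = np.array([0.5, 1.0])

dens = w * np.exp(-0.5 * ((acts[:, None] - mu) / sig) ** 2) / sig
resp = dens / dens.sum(axis=1, keepdims=True)       # soft assignments

# Responsibility-weighted mini-batch statistics per component.
nk = resp.sum(axis=0)
m_k = (resp * acts[:, None]).sum(axis=0) / nk
v_k = (resp * (acts[:, None] - m_k) ** 2).sum(axis=0) / nk

# Mixture-normalize: standardize within each soft sub-population, recombine.
normed = (resp * (acts[:, None] - m_k) / np.sqrt(v_k + 1e-5)).sum(axis=1)
print(round(float(normed.std()), 2))
```

With a bimodal activation distribution like this one, plain BN would leave two shifted lumps; normalizing per component recenters each sub-population, which is the effect MN exploits after rectifying non-linearities.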

Journal ArticleDOI
TL;DR: This correspondence considers a mixed radio-frequency/underwater wireless optical communication (RF-UWOC) system with a fixed-gain amplify-and-forward relay, where the RF link experiences a Generalized-$K$ distribution while the UWOC link undergoes a mixture Exponential-Generalized Gamma distribution.
Abstract: In this correspondence, we consider a mixed radio-frequency/underwater wireless optical communication (RF-UWOC) system with a fixed-gain amplify-and-forward relay, where the RF link experiences a Generalized-$K$ distribution while the UWOC link undergoes a mixture Exponential-Generalized Gamma distribution. In our analysis, new closed-form expressions for the cumulative distribution function (CDF) and probability density function (PDF) of the end-to-end signal-to-noise ratio (SNR) are derived. Capitalizing on these useful statistics, the performance of the proposed mixed RF-UWOC system is investigated in terms of the outage probability, average bit error rate, and average channel capacity. Our analytical expressions are verified via Monte Carlo simulation results, and insightful discussions are presented.
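A Monte Carlo sketch of the end-to-end analysis: for a fixed-gain AF relay the end-to-end SNR is gamma = gamma1 * gamma2 / (gamma2 + C), and the outage probability follows by simulation. The fading laws below are simplified stand-ins (Gamma for the RF hop, a two-component exponential/Gamma mixture for the UWOC hop), not the paper's Generalized-$K$ and Exponential-Generalized Gamma models, and all parameters are illustrative.

```python
# Monte Carlo outage probability of a dual-hop fixed-gain AF relay link:
# end-to-end SNR = g1 * g2 / (g2 + C), outage when it drops below a threshold.
import numpy as np

rng = np.random.default_rng(9)
trials, C, thresh = 500_000, 1.5, 1.0

g1 = rng.gamma(2.0, 2.0, trials)          # RF hop SNR (Gamma stand-in)
mix = rng.random(trials) < 0.3            # two-component UWOC hop stand-in
g2 = np.where(mix, rng.exponential(1.0, trials), rng.gamma(3.0, 1.5, trials))

snr = g1 * g2 / (g2 + C)                  # fixed-gain AF end-to-end SNR
outage = float((snr < thresh).mean())
print(round(outage, 3))
```

Average BER and ergodic capacity follow from the same simulated SNR samples by averaging the conditional error probability or log2(1 + snr), which is how closed-form expressions like the paper's are typically cross-checked.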