
Showing papers on "Probability distribution published in 2022"


Journal ArticleDOI
01 Jan 2022-Energy
TL;DR: Wang et al. as mentioned in this paper proposed a data-driven optimization method for economic dispatch of integrated electricity and natural gas systems under wind uncertainty, whose probability distribution is not assumed to take any fixed form; the method exploits limited historical data to improve the estimation of the worst-case probability distribution.

42 citations


Journal ArticleDOI
TL;DR: A statistically rigorous threshold selection scheme integrating a Bayesian inference strategy and a Monte Carlo discordancy test is proposed to detect the presence of damage while accommodating the uncertainties of the measurements and the probabilistic model of the TF.

34 citations


Journal ArticleDOI
TL;DR: In this paper, a deep learning-based distributionally robust joint chance constrained economic dispatch (ED) optimization framework was proposed for the effective utilization of renewable (wind) power in power systems.
Abstract: This paper proposes a holistic framework of data-driven distributionally robust joint chance constrained economic dispatch (ED) optimization, which seamlessly incorporates deep learning-based optimization for effective utilization of renewable energy in power systems. By leveraging a deep generative adversarial network (GAN), an f-divergence-based ambiguity set of wind power distributions is constructed as a ball centered around the probability distribution induced by a generator neural network. In particular, the GAN is well suited to capturing the complicated spatial and temporal correlations of wind power. Based upon this ambiguity set, a distributionally robust joint chance constrained ED model is developed to hedge against the distributional uncertainty present in multiple constraints, without assuming a perfectly known probability distribution. The proposed deep learning-based ED optimization framework greatly mitigates the conservatism afflicting distributionally robust individual chance constrained optimization. A theoretical a priori bound on the required number of synthetic wind power samples generated by the GAN is explicitly derived for the multi-period ED problem to guarantee a predefined risk level. The effectiveness and scalability of the proposed approach are demonstrated on the six-bus and IEEE 118-bus systems through comparison with state-of-the-art methods.

20 citations
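The ambiguity-set idea above can be illustrated with the simplest member of the f-divergence family, the KL divergence. The sketch below is only a toy membership test over a made-up three-level discretized wind-power distribution, not the paper's GAN-based construction:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions given as probability lists (an f-divergence with
    f(t) = t*log(t))."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def in_ambiguity_set(candidate, center, radius):
    """Membership test for a divergence ball centered at `center`."""
    return kl_divergence(candidate, center) <= radius

# Hypothetical nominal (center) distribution over three wind-power levels.
center = [0.5, 0.3, 0.2]
candidate = [0.45, 0.35, 0.2]
print(in_ambiguity_set(candidate, center, radius=0.05))
```

A distributionally robust constraint then has to hold for every distribution inside the ball, not just the center.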


Journal ArticleDOI
TL;DR: In this article, the structural responses of several mechanical systems are analyzed through their basic probabilistic characteristics, which are validated using a probabilistic semi-analytical approach as well as crude Monte Carlo simulation.

17 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a triplet metric driven multi-head graph neural network (GNN) augmented with decoupling adversarial learning to solve the problem of changing working conditions.

16 citations


Journal ArticleDOI
TL;DR: In this paper, a self-attention network is designed to generate the probability distribution of all patterns in a sentence, and this distribution is then applied as a constraint in the first Transformer layer to encourage its attention heads to follow the relational pattern structures.

13 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that non-equilibrium evolution of wave fields, as occurs over sudden bathymetry variations, can produce rogue seas with anomalous wave statistics; they model this process by modifying the Rayleigh distribution through the energetics of second-order theory and a non-homogeneous reformulation of the Khintchine theorem.
Abstract: Non-equilibrium evolution of wave fields, as occurring over sudden bathymetry variations, can produce rogue seas with anomalous wave statistics. We handle this process by modifying the Rayleigh distribution through the energetics of second-order theory and a non-homogeneous reformulation of the Khintchine theorem. The resulting probability model reproduces the enhanced tail of the probability distribution of unidirectional wave tank experiments. It also describes why the peak of rogue wave probability appears atop the shoal, and explains the influence of depth on variations in peak intensity. Furthermore, we interpret rogue wave likelihoods in finite depth through the $H$–$\sigma$ diagram, allowing a quick prediction for the effects of rapid depth change apart from the probability distribution.

12 citations
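For context, the baseline being modified is the Rayleigh law for wave heights, under which the exceedance probability is P(H > h) = exp(-2 (h/Hs)^2) for significant wave height Hs. A minimal sketch (the h > 2*Hs threshold is the conventional rogue-wave definition, not a value taken from this paper):

```python
import math

def rayleigh_exceedance(h, hs):
    """Probability that an individual wave height exceeds h under the
    standard Rayleigh model: P(H > h) = exp(-2 * (h/hs)**2)."""
    return math.exp(-2.0 * (h / hs) ** 2)

# A "rogue wave" is commonly defined as H > 2*Hs.
p_rogue = rayleigh_exceedance(2.0, 1.0)
print(p_rogue)  # exp(-8), roughly 3.35e-4
```

The paper's point is that over a shoal the observed tail sits well above this Rayleigh prediction.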


Journal ArticleDOI
TL;DR: In this article, a nonlocal generalization of the standard probability theory is proposed, based on the use of the general fractional calculus in the Luchko form; it includes nonlocal (general fractional) generalizations of the probability density, cumulative distribution function, probability, average values, and characteristic functions.
Abstract: Nonlocal generalization of the standard (classical) probability theory of a continuous distribution on a positive semi-axis is proposed. An approach to the formulation of a nonlocal generalization of the standard probability theory based on the use of the general fractional calculus in the Luchko form is proposed. Some basic concepts of the nonlocal probability theory are proposed, including nonlocal (general fractional) generalizations of probability density, cumulative distribution functions, probability, average values, and characteristic functions. Nonlocality is described by the pairs of Sonin kernels that belong to the Luchko set. Properties of the general fractional probability density function and the general fractional cumulative distribution function are described. The truncated GF probability density function, truncated GF cumulative distribution function, and truncated GF average values are defined. Examples of the general fractional (GF) probability distributions, the corresponding probability density functions, and cumulative distribution functions are described. Nonlocal (general fractional) distributions are described, including generalizations of uniform, degenerate, and exponential type distributions; distributions with the Mittag-Leffler, power law, Prabhakar, Kilbas–Saigo functions; and distributions that are described as convolutions of the operator kernels and standard probability density.

11 citations


Journal ArticleDOI
TL;DR: In this article, a prediction model for evaluating the probability distribution of pavement surface temperature in winter was developed, which consisted of two modules, namely, a Bayesian Structural Time Series (BSTS) module and a Bayesian neural network (BNN) module.

11 citations


Journal ArticleDOI
TL;DR: A new approach to efficiently compute the probability distribution of the sum of independent and identically hypergeometric-distributed random variables is presented; it reduces the computational effort to a few seconds while keeping remarkably high accuracy, with only negligible deviations from the exact distribution obtained via convolution.

11 citations
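The exact convolution baseline mentioned in the TL;DR can be sketched directly; the population parameters below are arbitrary illustrative values:

```python
from math import comb

def hypergeom_pmf(N, K, n):
    """pmf of the hypergeometric(N, K, n) distribution as a list indexed
    by k = 0..n (n draws without replacement from a population of N
    containing K successes); comb() returns 0 for infeasible k."""
    return [comb(K, k) * comb(N - K, n - k) / comb(N, n) for k in range(n + 1)]

def convolve(p, q):
    """Distribution of X + Y for independent X ~ p, Y ~ q."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def sum_pmf(N, K, n, m):
    """Exact pmf of the sum of m iid hypergeometric(N, K, n) variables."""
    single = hypergeom_pmf(N, K, n)
    total = single
    for _ in range(m - 1):
        total = convolve(total, single)
    return total

pmf = sum_pmf(N=50, K=10, n=5, m=3)
```

Repeated convolution is exact but its cost grows with the number of summands, which is the motivation for the paper's faster approximation.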


Journal ArticleDOI
A. N. Kostrygin

Journal ArticleDOI
TL;DR: A discrete kernel estimator combined with an exponentially weighted moving average scheme is introduced in this study to recursively estimate the time-varying target-birth cardinality distribution at each processing step; this reduces the delay of the standard GMCPHD filter's response to changes in the number of targets and thus improves the accuracy of the cardinality estimates.

Journal ArticleDOI
TL;DR: A data augmentation method is proposed for improved prediction by integrating degradation mechanisms and monitoring data; the high prediction accuracy demonstrates that reliable oil condition prediction can be guaranteed even with small samples.

Journal ArticleDOI
TL;DR: In this paper, a variant of the sparse identification of nonlinear dynamics (SINDy) is developed, which integrates automatic differentiation and the recent time-stepping constraints motivated by Rudy et al (2019 J. Comput. Phys. 396 483–506).
Abstract: The sparse identification of nonlinear dynamics (SINDy) is a regression framework for the discovery of parsimonious dynamic models and governing equations from time-series data. As with all system identification methods, noisy measurements compromise the accuracy and robustness of the model discovery procedure. In this work we develop a variant of the SINDy algorithm that integrates automatic differentiation and the recent time-stepping constraints motivated by Rudy et al (2019 J. Comput. Phys. 396 483–506) for simultaneously (1) denoising the data, (2) learning and parametrizing the noise probability distribution, and (3) identifying the underlying parsimonious dynamical system responsible for generating the time-series data. Thus, within an integrated optimization framework, noise can be separated from signal, resulting in an architecture that is approximately twice as robust to noise as state-of-the-art methods, handling as much as 40% noise on a given time-series signal and explicitly parametrizing the noise probability distribution. We demonstrate this approach on several numerical examples, from Lotka-Volterra models to the spatio-temporal Lorenz 96 model. Further, we show the method can learn a diversity of probability distributions for the measurement noise, including Gaussian, uniform, Gamma, and Rayleigh distributions.
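The core of the original SINDy regression, sequentially thresholded least squares, can be sketched on a toy one-dimensional system. This omits the paper's automatic differentiation and noise modeling, and the candidate library {x, x^3} and threshold 0.1 are illustrative choices:

```python
import math

def fit_two_term(xs, ys, f1, f2):
    """Least-squares fit y ~ c1*f1(x) + c2*f2(x) via the 2x2 normal
    equations, solved with Cramer's rule."""
    a11 = sum(f1(x) ** 2 for x in xs)
    a12 = sum(f1(x) * f2(x) for x in xs)
    a22 = sum(f2(x) ** 2 for x in xs)
    b1 = sum(f1(x) * y for x, y in zip(xs, ys))
    b2 = sum(f2(x) * y for x, y in zip(xs, ys))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

# Trajectory of the true system x' = -2x, sampled noiselessly.
xs = [math.exp(-2 * 0.05 * i) for i in range(50)]
dxs = [-2 * x for x in xs]          # exact derivatives

# Candidate library {x, x^3}: least squares, then hard thresholding.
c1, c3 = fit_two_term(xs, dxs, lambda x: x, lambda x: x ** 3)
if abs(c3) < 0.1:                    # zero out small coefficients ...
    c3 = 0.0                         # ... and refit on the surviving term
    c1 = sum(x * dx for x, dx in zip(xs, dxs)) / sum(x * x for x in xs)
print(c1, c3)
```

The thresholding step is what produces a parsimonious model: only the x term survives, recovering x' = -2x.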

Journal ArticleDOI
TL;DR: In this article, the authors proposed a multi-peak joint probability distribution model for estimating the environmental surface at a deep-cut gorge bridge site, where the probability-segmentation-based first-order inverse reliability method (PSB-IFORM) is used to solve the problem.

Journal ArticleDOI
TL;DR: A Riemannian Manifold Hamiltonian Monte Carlo based subset simulation (RMHMC-SS) method to overcome limitations of existing Monte Carlo approaches in solving reliability problems defined in highly-curved non-Gaussian spaces is proposed.

Journal ArticleDOI
TL;DR: In this article, the authors used the metaheuristic Aquila Optimizer (AO) method to estimate the parameters of the proposed original and mixture PDFs in order to model wind speed characteristics.

DOI
01 Jan 2022
TL;DR: In this paper, a flexible probability mass function is proposed for modeling count data, especially asymmetric and over-dispersed observations; all of its statistical properties can be expressed in explicit form, which makes the proposed model useful in time series and regression analysis.
Abstract: In this paper, a flexible probability mass function is proposed for modeling count data, especially asymmetric and over-dispersed observations. Some of its distributional properties are investigated. It is found that all its statistical properties can be expressed in explicit forms, which makes the proposed model useful in time series and regression analysis. Different estimation approaches, including maximum likelihood, moments, least squares, Anderson-Darling, Cramér-von Mises, and maximum product of spacing estimators, are derived to obtain the best estimator for the real data. The estimation performance of these techniques is assessed via a comprehensive simulation study. The flexibility of the new discrete distribution is assessed using four distinctive real data sets (coronavirus, flood peaks, forest fire, and leukemia). Finally, the new probabilistic model can serve as an alternative to other competitive distributions available in the literature for modeling count data.
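The abstract does not give the pmf itself, but a standard first check for data of this kind is the index of dispersion (variance-to-mean ratio), which exceeds 1 for over-dispersed counts. A minimal sketch with a made-up sample:

```python
from statistics import mean, pvariance

def dispersion_index(counts):
    """Index of dispersion (variance-to-mean ratio). Values above 1
    indicate over-dispersion relative to the Poisson distribution,
    the situation such flexible count models are designed for."""
    return pvariance(counts) / mean(counts)

# Hypothetical over-dispersed count sample.
sample = [0, 0, 1, 1, 2, 2, 3, 5, 8, 13]
print(dispersion_index(sample) > 1.0)
```

A Poisson fit would be inadequate here, which motivates distributions with a free dispersion parameter.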

Journal ArticleDOI
TL;DR: This paper considers the computational model of a dynamic aerospace system and addresses the issues posed by the NASA Langley Uncertainty Quantification Challenge on Optimization Under Uncertainty, which comprises six tasks.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a method to find the probability of each topology given a set of measured values by tracing how the measurements affect the shape of the new consolidated pdf in each topology compared to the original pdf; the mathematical treatment of extracting such probabilities is presented, and the proposed technique is validated in numerical studies in terms of accuracy, speed, and versatility.
Abstract: The status of switches in distribution systems might be changed for protection purposes or to achieve the operation objectives. However, the system topology should be known for the efficacious management and control of these systems. In most practical systems, due to the lack of enough measurements, the topology cannot be identified accurately and speedily. The forecast output can be transformed into the joint probability density function (pdf) of uncertain parameters, e.g., power demands. The measured data and this pdf are first conflated to update the joint pdf to best comply with the measurements. The topology identification problem, i.e., finding the probability of each topology given a set of measured values, is converted to a simpler form, i.e., finding the probability of observing these measured values in this topology. These probabilities are then calculated by tracing how the measurements affect the shape of the new consolidated pdf in each topology compared to the original pdf. The proposed technique employs as much data as available, is able to find the accurate probabilities, and is yet quite fast. The mathematical treatment of extracting such probabilities is presented and the proposed technique is validated in numerical studies in terms of accuracy, speed, and versatility.
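The core probabilistic step, scoring each candidate topology by how well it explains the measurements, is Bayes' rule: P(T|z) is proportional to P(z|T)P(T). The sketch below uses simple Gaussian measurement likelihoods and two hypothetical topologies, not the paper's consolidated-pdf machinery:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def topology_posterior(z, likelihoods, priors):
    """Posterior probability of each candidate topology given a scalar
    measurement z: P(T|z) proportional to P(z|T) * P(T)."""
    weights = [lik(z) * pr for lik, pr in zip(likelihoods, priors)]
    total = sum(weights)
    return [w / total for w in weights]

# Two hypothetical topologies predicting different voltage means (p.u.).
likelihoods = [lambda z: gauss_pdf(z, 0.98, 0.01),   # topology A
               lambda z: gauss_pdf(z, 0.95, 0.01)]   # topology B
posterior = topology_posterior(0.975, likelihoods, priors=[0.5, 0.5])
print(posterior)
```

With the measurement at 0.975, topology A is far more probable; the paper's contribution is making this tractable for realistic joint pdfs of many uncertain demands.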

Journal ArticleDOI
TL;DR: In this article, a data-driven sparse polynomial chaos expansion-based surrogate model for the stochastic economic dispatch problem considering uncertainty from wind power is proposed, which can provide accurate estimations of the statistical information (e.g., mean, variance, probability density function, and cumulative distribution function) without requiring the probability distributions of random inputs.
Abstract: This letter proposes a data-driven sparse polynomial chaos expansion-based surrogate model for the stochastic economic dispatch problem considering uncertainty from wind power. The proposed method can provide accurate estimations for the statistical information (e.g., mean, variance, probability density function, and cumulative distribution function) for the stochastic economic dispatch solution efficiently without requiring the probability distributions of random inputs. Simulation studies on an integrated electricity and gas system (IEEE 118-bus system integrated with a 20-node gas system) are presented, demonstrating the efficiency and accuracy of the proposed method compared to the Monte Carlo simulations.
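Once a polynomial chaos surrogate is available, its mean and variance follow directly from the expansion coefficients. The sketch below assumes a one-dimensional expansion in probabilists' Hermite polynomials with a standard Gaussian germ, whose orthogonality norms are <He_k^2> = k!; the coefficients are made up:

```python
from math import factorial

def pce_mean_var(coeffs):
    """Mean and variance of a 1-D polynomial chaos expansion
    u(xi) = sum_k c_k He_k(xi) in probabilists' Hermite polynomials:
    mean = c_0, variance = sum_{k>=1} c_k^2 * k!."""
    mean = coeffs[0]
    var = sum(c * c * factorial(k) for k, c in enumerate(coeffs) if k >= 1)
    return mean, var

# Hypothetical surrogate coefficients for a dispatch-cost output.
m, v = pce_mean_var([100.0, 3.0, 0.5])
print(m, v)   # 100.0 and 3^2 * 1! + 0.5^2 * 2! = 9.5
```

This is why PCE surrogates deliver moments essentially for free, without the Monte Carlo sampling used as the comparison baseline.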

Journal ArticleDOI
TL;DR: In this article, the authors presented improved approximations and, in some cases, new approximations for parameter estimation with the method of ordinary moments and the method of linear moments; these are useful for direct calculation of the parameters, because the errors of the approximate estimation are similar to those of iterative numerical methods.
Abstract: Estimating the parameters of probability distributions generally involves solving a system of nonlinear equations or a nonlinear equation, which is a technical difficulty in their usual application in hydrology. The choice of probability distributions for the calculation of extreme values in hydrology is, in most cases, made according to the ease of calculation of the estimated parameters and the explicit form of the inverse probability function. This article presents improved approximations and, in some cases, new approximations for estimation with the method of ordinary moments and the method of linear moments, which are useful for direct calculation of the parameters, because the errors of the approximate estimation are similar to those of iterative numerical methods. Thirteen probability distributions of two and three parameters frequently used in hydrology are presented, for which parameter estimation was previously laborious. Thus, the approximate estimation of the parameters by the two methods is simple but also precise and easily applicable by hydrology researchers. The new and improved approximate forms presented in this article are the result of research conducted within the Faculty of Hydrotechnics to update the Romanian normative standards in the hydrotechnical field.
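A classic example of such an explicit (non-iterative) estimate is the method-of-moments fit of the Gumbel (EV1) distribution, widely used in hydrology; whether this distribution is among the paper's thirteen is not stated here, and the discharge sample below is invented:

```python
import math
from statistics import mean, stdev

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moments_fit(sample):
    """Method-of-moments estimators for the Gumbel (EV1) distribution:
    scale beta = s * sqrt(6) / pi, location mu = xbar - gamma * beta."""
    s = stdev(sample)
    beta = s * math.sqrt(6.0) / math.pi
    mu = mean(sample) - EULER_GAMMA * beta
    return mu, beta

# Hypothetical annual-maximum discharge sample (m^3/s).
mu, beta = gumbel_moments_fit([310.0, 450.0, 280.0, 520.0, 390.0, 610.0, 350.0])
print(mu, beta)
```

Both estimators are closed-form functions of the sample mean and standard deviation, which is exactly the property the article pursues for harder distributions.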


Journal ArticleDOI
TL;DR: In this article, it was shown that the late-time probability distribution of the pattern corresponds to the canonical probability distribution of the antiferromagnetic Ising model and can be generated by dynamics different from the commonly used Glauber dynamics.
Abstract: The ocellated lizard (Timon lepidus) exhibits an intricate skin color pattern made of monochromatic black and green skin scales, whose dynamics of color flipping are known to be well modeled by a stochastic cellular automaton. We show that the late-time probability distribution of the pattern corresponds to the canonical probability distribution of the antiferromagnetic Ising model and can be generated by dynamics different from the commonly-used Glauber. We comment on skin scale patterns generated by the Ising model on the triangular lattice in the low-temperature limit.
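Glauber dynamics for the antiferromagnetic Ising model accepts a spin flip with probability 1/(1 + exp(dE/T)). A minimal sketch on a small square lattice with periodic boundaries; the lattice size, temperature, and sweep count are illustrative choices, and the paper's triangular-lattice case is not reproduced:

```python
import math
import random

def glauber_sweeps(L=6, T=0.05, sweeps=300, seed=0):
    """Glauber dynamics for the antiferromagnetic Ising model
    (H = +J * sum_<ij> s_i s_j, J = 1) on an L x L lattice with
    periodic boundaries, starting from a random spin configuration.
    Returns the initial and final energies."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def energy():
        # Each bond counted once (right and down neighbors).
        return sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
                   for i in range(L) for j in range(L))

    e0 = energy()
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        h = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
             + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = -2.0 * s[i][j] * h          # energy change of flipping s[i][j]
        if rng.random() < 1.0 / (1.0 + math.exp(dE / T)):
            s[i][j] = -s[i][j]
    return e0, energy()

e0, e1 = glauber_sweeps()
print(e0, e1)   # energy relaxes toward the checkerboard ground state
```

At low temperature the dynamics freezes into the antiferromagnetic checkerboard, the analogue of the lizard's alternating green/black scale pattern.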

Journal ArticleDOI
TL;DR: In this article, the authors proposed an objective and unbiased method to estimate the probability distribution of a soil property from fractional moments of the observed data; the method is based on the principle of maximum entropy and is free from the assumptions of classical distributions.
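In the simplest instance of the maximum entropy principle, knowing only the mean m of a non-negative quantity yields the exponential density f(x) = (1/m) exp(-x/m); the fractional-moment version used in the paper generalizes this, but the base case can be sketched as:

```python
import math

def maxent_exponential(m):
    """With only the first moment m known on [0, inf), the
    maximum-entropy density is the exponential pdf
    f(x) = (1/m) * exp(-x/m)."""
    return lambda x: math.exp(-x / m) / m

pdf = maxent_exponential(2.0)

# Crude left-Riemann check that the density integrates to about 1.
dx = 0.001
total = sum(pdf(i * dx) * dx for i in range(40000))
print(total)
```

With fractional moments E[X^a_k] as constraints, the same principle produces densities of the form exp(-sum_k lambda_k x^(a_k)), which is what makes the approach free of classical distributional assumptions.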

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a probability distribution model for the stress concentration factor (SCF) in tubular T/Y-connections with FRP under axial, IPB, and OPB loads.

Journal ArticleDOI
TL;DR: In this paper, the authors established general upper bounds on the Kolmogorov distance between two probability distributions in terms of the distance between these distributions as measured with respect to the Wasserstein or smooth Wasserstein metrics.

Journal ArticleDOI
TL;DR: In this paper, a risk-based approach to support water utilities in defining pipe rehabilitation priorities is presented, where numerical values are assigned to both consequence and probability in the quantitative risk analysis.
Abstract: A risk-based approach to support water utilities in terms of defining pipe rehabilitation priorities is presented. In a risk analysis in the risk management process, the probability that a given event will happen and the consequences if it does happen have to be estimated and combined. In the quantitative risk analysis, numerical values are assigned to both consequence and probability. In this study, the risk event addressed was the inability to supply water due to pipe breaks. Therefore, on the probability side, the probability of pipes breaking was assessed, and on the consequence side, the reduced ability to satisfy the water demand (hydraulic reliability) due to pipe breakage was computed. Random Forest analysis was implemented for the probability side, while the Asset Vulnerability Analysis Toolkit was used to analyse the network’s hydraulic reliability. Pipes could then be ranked based on the corresponding risk magnitude, thereby feeding a risk evaluation step; at this step, decisions are made concerning which risks need treatment, and also concerning the treatment priorities, i.e., rehabilitation priorities. The water distribution network of Trondheim, Norway, was used as a case study area, and this study illustrates how the developed method aids the development of a risk-based rehabilitation plan.
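The risk-magnitude ranking described above reduces to sorting pipes by probability times consequence. The sketch below uses invented pipe records, with `consequence` standing in for the computed loss of hydraulic reliability:

```python
def rank_by_risk(pipes):
    """Rank pipes by risk magnitude = break probability x consequence
    (reduction in satisfied demand), highest risk first."""
    return sorted(pipes, key=lambda p: p["p_break"] * p["consequence"],
                  reverse=True)

# Hypothetical pipe records: annual break probability and the fraction
# of demand that cannot be satisfied (0..1) if the pipe fails.
pipes = [
    {"id": "P1", "p_break": 0.02, "consequence": 0.40},
    {"id": "P2", "p_break": 0.10, "consequence": 0.05},
    {"id": "P3", "p_break": 0.01, "consequence": 0.90},
]
ranking = [p["id"] for p in rank_by_risk(pipes)]
print(ranking)
```

Note that P3 ranks first despite its low break probability: the combination with a severe consequence dominates, which is exactly why risk (not probability alone) drives the rehabilitation plan.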


Journal ArticleDOI
TL;DR: This paper proposes computationally efficient inner and outer approximations for DRO problems under a piecewise linear objective function, with a moment-based ambiguity set and a combined ambiguity set including Wasserstein distance and moment information, and verifies the significant efficiency and practical applicability of these approximations in solving both production–transportation and multiproduct newsvendor problems.
Abstract: Distributionally robust optimization (DRO) is a modeling framework in decision making under uncertainty in which the probability distribution of a random parameter is unknown although its partial information (e.g., statistical properties) is available. In this framework, the unknown probability distribution is assumed to lie in an ambiguity set consisting of all distributions that are compatible with the available partial information. Although DRO bridges the gap between stochastic programming and robust optimization, one of its limitations is that its models for large-scale problems can be significantly difficult to solve, especially when the uncertainty is of high dimension. In this paper, we propose computationally efficient inner and outer approximations for DRO problems under a piecewise linear objective function and with a moment-based ambiguity set and a combined ambiguity set including Wasserstein distance and moment information. In these approximations, we split a random vector into smaller pieces, leading to smaller matrix constraints. In addition, we use principal component analysis to shrink uncertainty space dimensionality. We quantify the quality of the developed approximations by deriving theoretical bounds on their optimality gap. We display the practical applicability of the proposed approximations in a production–transportation problem and a multiproduct newsvendor problem. The results demonstrate that these approximations dramatically reduce the computational time while maintaining high solution quality. The approximations also help construct an interval that is tight for most cases and includes the (unknown) optimal value for a large-scale DRO problem, which usually cannot be solved to optimality (or even feasibility in most cases). 
Summary of Contribution: This paper studies an important type of optimization problem, that is, distributionally robust optimization problems, by developing computationally efficient inner and outer approximations via operations research tools. Specifically, we consider several variants of such problems that are practically important and that admit tractable yet large-scale reformulation. We accordingly utilize random vector partition and principal component analysis to derive efficient approximations with smaller sizes, which, more importantly, provide a theoretical performance guarantee with respect to low optimality gaps. We verify the significant efficiency (i.e., reducing computational time while maintaining high solution quality) of our proposed approximations in solving both production–transportation and multiproduct newsvendor problems via extensive computing experiments.