
Showing papers on "Probability density function published in 2014"


Journal ArticleDOI
TL;DR: In this article, an efficient stochastic framework is proposed to investigate the effect of uncertainty on the optimal operation management of microgrids (MGs); the framework considers the uncertainties of load forecast error, wind turbine (WT) generation, photovoltaic (PV) generation and market price.

343 citations


Journal ArticleDOI
TL;DR: A new receptor modelling method is developed to identify and characterise emission sources; demonstrated on an area of high source complexity, it identifies and characterises many sources that currently used techniques do not reveal.
Abstract: In this paper a new receptor modelling method is developed to identify and characterise emission sources. The method is an extension of the commonly used conditional probability function (CPF). The CPF approach is extended to the bivariate case to produce a conditional bivariate probability function (CBPF) plot using wind speed as a third variable plotted on the radial axis. The bivariate case provides more information on the type of sources being identified by providing important dispersion characteristic information. By considering intervals of concentration, considerably more source information can be revealed that is absent in the basic CPF or CBPF. We demonstrate the application of the approach by considering an area of high source complexity, where many new sources can be identified and characterised compared with currently used techniques. Dispersion model simulations are undertaken to verify the approach. The technique has been made available through the openair R package.
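As a rough illustration of the CBPF construction (a sketch only, not the openair implementation), the following Python snippet bins wind direction and wind speed and, in each cell, computes the fraction of hours whose concentration exceeds a chosen percentile; the synthetic data, bin counts and 75th-percentile threshold are all assumptions made for the example.

    import numpy as np

    def cbpf(conc, wd, ws, threshold, wd_bins=36, ws_bins=10):
        """Conditional bivariate probability function (illustrative sketch).

        For each (wind direction, wind speed) cell, return the fraction of
        hours in that cell whose concentration exceeds `threshold`.
        """
        wd_edges = np.linspace(0.0, 360.0, wd_bins + 1)
        ws_edges = np.linspace(0.0, np.max(ws), ws_bins + 1)

        # Counts of all hours and of "high concentration" hours per cell.
        n_all, _, _ = np.histogram2d(wd, ws, bins=[wd_edges, ws_edges])
        mask = conc > threshold
        n_high, _, _ = np.histogram2d(wd[mask], ws[mask], bins=[wd_edges, ws_edges])

        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(n_all > 0, n_high / n_all, np.nan)

    # Example with synthetic hourly data (assumed, for illustration only).
    rng = np.random.default_rng(0)
    wd = rng.uniform(0.0, 360.0, 5000)            # wind direction, degrees
    ws = rng.gamma(2.0, 2.0, 5000)                # wind speed, m/s
    conc = rng.lognormal(1.0, 0.5, 5000)          # pollutant concentration
    conc[(wd > 80) & (wd < 120) & (ws > 5)] *= 3.0   # a fake point source

    p = cbpf(conc, wd, ws, threshold=np.percentile(conc, 75))
    print(p.shape)   # (36, 10) grid of conditional probabilities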

233 citations


Book
12 Mar 2014
TL;DR: It is shown how random variate generation algorithms work, and an interface is suggested for R as an example of a statistical library that could be used for simulation or statistical computing.
Abstract: Random variate generation is an important tool in statistical computing. Many programs for simulation or statistical computing (e.g. R) provide a collection of random variate generators for many standard distributions. However, as statistical modeling has become more sophisticated, there is demand for larger classes of distributions. Adding generators for each newly required distribution is not a sustainable solution to this problem. Instead, so-called automatic (or black-box) methods have been developed in the last decade for sampling from fairly large classes of distributions with a single piece of code. For such algorithms some data about the distribution must be supplied: typically the density function (or probability mass function) and (maybe) the (approximate) location of the mode. In this contribution we show how such algorithms work and suggest an interface for R as an example of a statistical library.
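The following is a minimal illustration of the black-box idea, assuming only that the user can supply an (unnormalised) density, its mode and a bounded support; it uses naive rejection sampling rather than the more sophisticated automatic algorithms (e.g. transformed density rejection) that the book actually describes.

    import numpy as np

    def rejection_sampler(pdf, mode, lo, hi, n, rng=None):
        """Naive black-box sampler: needs only the (unnormalised) density,
        the location of its mode and a bounded support [lo, hi]."""
        rng = np.random.default_rng() if rng is None else rng
        m = pdf(mode)                      # envelope height taken from the mode
        out = []
        while len(out) < n:
            x = rng.uniform(lo, hi, size=n)
            u = rng.uniform(0.0, m, size=n)
            out.extend(x[u <= pdf(x)])     # accept points lying under the density
        return np.array(out[:n])

    # Example: sample from an (unnormalised) density known only as a function.
    samples = rejection_sampler(lambda x: np.exp(-0.5 * x**2) * (1 + np.cos(x)**2),
                                mode=0.0, lo=-6.0, hi=6.0, n=10_000)
    print(samples.mean(), samples.std())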

220 citations


Journal ArticleDOI
TL;DR: It is shown that the sum and maximum distributions of independent but arbitrarily distributed κ - μ shadowed variates can be expressed in closed form and this set of new statistical results is finally applied to modeling and analysis of several wireless communication systems, e.g., the proposed distribution has applications to land mobile satellite (LMS) communications and underwater acoustic communications (UAC).
Abstract: This paper investigates a natural generalization of the κ - μ fading channel in which the line-of-sight (LOS) component is subject to shadowing. This fading distribution has a clear physical interpretation and good analytical properties and unifies the one-sided Gaussian, Rayleigh, Nakagami-m, Rician, κ - μ, and Rician shadowed fading distributions. The three basic statistical characterizations, i.e., probability density function (pdf), cumulative distribution function (cdf), and moment-generating function (mgf), of the κ - μ shadowed distribution are obtained in closed form. Then, it is also shown that the sum and maximum distributions of independent but arbitrarily distributed κ - μ shadowed variates can be expressed in closed form. This set of new statistical results is finally applied to modeling and analysis of several wireless communication systems, e.g., the proposed distribution has applications to land mobile satellite (LMS) communications and underwater acoustic communications (UAC).

183 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the fluctuations of the largest eigenvalue λmax of N × N random matrices in the limit of large N. The main focus is on Gaussian β ensembles, including in particular the Gaussian orthogonal (β = 1), unitary (β = 2) and symplectic (β = 4) ensembles.
Abstract: We study the fluctuations of the largest eigenvalue λmax of N × N random matrices in the limit of large N. The main focus is on Gaussian β ensembles, including in particular the Gaussian orthogonal (β = 1), unitary (β = 2) and symplectic (β = 4) ensembles. The probability density function (PDF) of λmax consists, for large N, of a central part described by Tracy-Widom distributions flanked, on both sides, by two large deviation tails. While the central part characterizes the typical fluctuations of λmax, the large deviation tails are instead associated with extremely rare fluctuations. Here we review some recent developments in the theory of these extremely rare events using a Coulomb gas approach. We discuss in particular the third order phase transition which separates the left tail from the right tail, a transition akin to the so-called Gross-Witten-Wadia phase transition found in 2-d lattice quantum chromodynamics. We also discuss the occurrence of similar third order transitions in various physical problems, including non-intersecting Brownian motions, conductance fluctuations in mesoscopic physics and entanglement in a bipartite system.
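A small Monte Carlo sketch of the central (typical-fluctuation) part of this PDF for the orthogonal ensemble (β = 1); the matrix size, sample count and normalisation below are arbitrary choices made for illustration.

    import numpy as np

    def goe_lambda_max(n, n_samples, rng=None):
        """Largest eigenvalues of n x n GOE matrices (Gaussian entries,
        symmetrised); returned unscaled."""
        rng = np.random.default_rng() if rng is None else rng
        lmax = np.empty(n_samples)
        for i in range(n_samples):
            x = rng.standard_normal((n, n))
            a = (x + x.T) / np.sqrt(2.0)          # real symmetric (beta = 1)
            lmax[i] = np.linalg.eigvalsh(a)[-1]   # eigvalsh returns sorted values
        return lmax

    # Typical fluctuations of lambda_max around the edge of the semicircle,
    # here simply rescaled by sqrt(N).
    lmax = goe_lambda_max(n=200, n_samples=500)
    scaled = lmax / np.sqrt(200)
    hist, edges = np.histogram(scaled, bins=40, density=True)
    print(scaled.mean(), scaled.std())   # mean close to 2 for this normalisation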

159 citations


Journal ArticleDOI
TL;DR: A new signal-filtering method is proposed that combines the empirical mode decomposition (EMD) with a similarity measure: the signal is partially reconstructed from the modes whose probability density functions are most similar to the pdf of the input signal.
Abstract: This paper introduces a new signal-filtering method which combines the empirical mode decomposition (EMD) and a similarity measure. A noisy signal is adaptively broken down into oscillatory components called intrinsic mode functions by EMD, followed by an estimation of the probability density function (pdf) of each extracted mode. The key idea of this paper is to make use of partial reconstruction, the relevant modes being selected on the basis of a striking similarity between the pdf of the input signal and that of each mode. Different similarity measures are investigated and compared. The obtained results, on simulated and real signals, show the effectiveness of the pdf-based filtering strategy for removing both white Gaussian and colored noises and demonstrate its superior performance over partial-reconstruction approaches reported in the literature.
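A sketch of the pdf-based mode-selection step, assuming the intrinsic mode functions have already been computed by some EMD routine; the L2 distance between histogram pdf estimates used here is only one of the similarity measures the paper compares, and the keep_frac rule is a simplification of the paper's selection criterion.

    import numpy as np

    def hist_pdf(x, edges):
        """Histogram estimate of the pdf of x on fixed bin edges."""
        p, _ = np.histogram(x, bins=edges, density=True)
        return p

    def select_modes_by_pdf(signal, imfs, keep_frac=0.5):
        """Score each IMF by the similarity (negative L2 distance) between its
        pdf and the pdf of the noisy input signal, then partially reconstruct
        the signal from the most similar modes."""
        edges = np.linspace(signal.min(), signal.max(), 64)
        p_sig = hist_pdf(signal, edges)
        scores = [-np.linalg.norm(hist_pdf(imf, edges) - p_sig) for imf in imfs]
        order = np.argsort(scores)[::-1]                 # most similar first
        n_keep = max(1, int(keep_frac * len(imfs)))
        kept = order[:n_keep]
        return imfs[kept].sum(axis=0), kept

    # Usage (illustrative): `imfs` is an (n_modes, n_samples) array produced by
    # an EMD routine applied to the noisy signal.
    # denoised, kept_idx = select_modes_by_pdf(noisy_signal, imfs)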

151 citations


Journal ArticleDOI
TL;DR: In this paper, the time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere.
Abstract: The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(−1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(−0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse, causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.
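The relation between the two quoted slopes is the usual volume-to-mass reweighting of the density PDF (a worked step added here, not quoted from the paper):

    P_M(\rho)\, d\rho \;\propto\; \rho\, P_V(\rho)\, d\rho,
    \qquad\text{so}\qquad
    P_V(\rho) \propto \rho^{-1.54} \;\Longrightarrow\; P_M(\rho) \propto \rho^{-0.54}.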

147 citations


Journal ArticleDOI
TL;DR: In this article, the authors quantitatively identify the origin of anomalous transport in a representative model of a heterogeneous porous medium under uniform (in the mean) flow conditions, which arises in the complex flow patterns of lognormally distributed hydraulic conductivity fields, with several decades of K values.
Abstract: Anomalous (or “non-Fickian”) transport is ubiquitous in the context of tracer migration in geological formations. We quantitatively identify the origin of anomalous transport in a representative model of a heterogeneous porous medium under uniform (in the mean) flow conditions; we focus on anomalous transport which arises in the complex flow patterns of lognormally distributed hydraulic conductivity (K) fields, with several decades of K values. Transport in the domains is determined by a particle tracking technique and characterized by breakthrough curves (BTCs). The BTC averaged over multiple realizations demonstrates anomalous transport in all cases, which is accounted for entirely by a power law distribution ∼ t^(−1−β) of local transition times. The latter is contained in the probability density function ψ(t) of transition times, embedded in the framework of a continuous time random walk (CTRW). A unique feature of our analysis is the derivation of ψ(t) as a function of parameters quantifying the heterogeneity of the domain. In this context, we first establish the dominance of preferential pathways across each domain, and characterize the statistics of these pathways by forming a particle-visitation weighted histogram, Hw(K), of the hydraulic conductivity. By converting the ln(K) dependence of Hw(K) into time, we demonstrate the equivalence of Hw(K) and ψ(t), and delineate the region of Hw(K) that forms the power law of ψ(t). This thus defines the origin of anomalous transport. Analysis of the preferential pathways clearly demonstrates the limitations of critical path analysis and percolation theory as a basis for determining the origin of anomalous transport. Furthermore, we derive an expression defining the power law exponent β in terms of the Hw(K) parameters. The equivalence between Hw(K) and ψ(t) is a remarkable result, particularly given the nature of the K heterogeneity, the complexity of the flow field within each realization, and the statistics of the particle transitions.
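A toy continuous time random walk illustrating how a power-law transition-time density ψ(t) ∼ t^(−1−β) with β < 1 produces late-time tailing in the breakthrough curve; the Pareto waiting times and all parameters below are assumptions for the sketch and are not derived from Hw(K) as in the paper.

    import numpy as np

    def ctrw_breakthrough(n_particles, beta, dx=1.0, x_exit=100.0, rng=None):
        """Toy 1-D CTRW: every transition advances a particle by dx and costs a
        waiting time drawn from psi(t) ~ t^(-1-beta) for t >= 1 (Pareto).
        Returns the arrival (breakthrough) times at x_exit."""
        rng = np.random.default_rng() if rng is None else rng
        n_jumps = int(np.ceil(x_exit / dx))
        waits = rng.pareto(beta, size=(n_particles, n_jumps)) + 1.0
        return waits.sum(axis=1)

    # beta < 1: heavy-tailed transition times dominate the late-time tail of the BTC.
    arrival = ctrw_breakthrough(n_particles=20_000, beta=0.7)
    t_grid = np.logspace(np.log10(arrival.min()), np.log10(arrival.max()), 50)
    btc = np.array([(arrival <= t).mean() for t in t_grid])   # cumulative BTC
    print(btc[0], btc[-1])   # climbs from ~0 to 1 with a long late-time tail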

129 citations


Journal ArticleDOI
TL;DR: This paper proposes to compute the failure probability by means of the recently proposed meta-model-based importance sampling method, which reduces the computational cost when the limit-state function involves the output of an expensive-to-evaluate computational model.

127 citations


Journal ArticleDOI
TL;DR: In this paper, a seismological model for earthquakes induced by subsurface reservoir volume changes, based on the work of Kostrov and McGarr, is developed and applied to the Groningen gas field.
Abstract: A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov (1974) and McGarr (1976) linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. (2005). In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
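A hedged sketch of the catalog-simulation step described above: draw magnitudes from a truncated Gutenberg-Richter (exponential) distribution, convert them to seismic moments via the Hanks-Kanamori relation, and sum the catalog moment. The b-value, magnitude bounds and catalog size are placeholders, not Groningen values, and the Pareto Sum confidence bounds of the paper are not reproduced.

    import numpy as np

    def sample_catalog_moment(n_events, b=1.0, m_min=1.5, m_max=5.0, rng=None):
        """Draw event magnitudes from a truncated Gutenberg-Richter (exponential)
        distribution and return the summed seismic moment in N*m."""
        rng = np.random.default_rng() if rng is None else rng
        beta = b * np.log(10.0)
        u = rng.uniform(size=n_events)
        # Inverse CDF of the exponential law truncated to [m_min, m_max].
        mags = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta
        moments = 10.0 ** (1.5 * mags + 9.1)       # Hanks-Kanamori relation
        return moments.sum()

    # Distribution of total catalog moment over many synthetic catalogs.
    totals = np.array([sample_catalog_moment(200) for _ in range(1000)])
    print(np.percentile(totals, [5, 50, 95]))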

123 citations


Journal ArticleDOI
TL;DR: The inherent mechanism behind the high efficiency and superior performance of chaos optimization algorithms (COA) is revealed from a new perspective: the probability distribution properties and search speed of the chaotic sequences generated by different chaotic maps.

Posted Content
TL;DR: In this article, the authors derived the probability density function (PDF) and cumulative distribution function (CDF) of the minimum of two non-central Chi-square random variables with two degrees of freedom in terms of power series.
Abstract: In this letter, we derive the probability density function (PDF) and cumulative distribution function (CDF) of the minimum of two non-central Chi-square random variables with two degrees of freedom in terms of power series. With the help of the derived PDF and CDF, we obtain the exact ergodic capacity of the following adaptive protocols in a decode-and-forward (DF) cooperative system over dissimilar Rician fading channels: (i) constant power with optimal rate adaptation; (ii) optimal simultaneous power and rate adaptation; (iii) channel inversion with fixed rate. By using the analytical expressions of the capacity, it is observed that the optimal power and rate adaptation provides better capacity than the optimal rate adaptation with constant power from low to moderate signal-to-noise ratio values over dissimilar Rician fading channels. Despite low complexity, the channel inversion based adaptive transmission is shown to suffer from significant loss in capacity as compared to the other adaptive transmission based techniques over DF Rician channels.
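A quick Monte Carlo cross-check of the basic quantity the letter works with, the minimum of two independent non-central chi-square variates with two degrees of freedom; the non-centrality parameters below are arbitrary, and scipy is used only for the marginal CDFs, not for the power-series expressions derived in the letter.

    import numpy as np
    from scipy.stats import ncx2

    rng = np.random.default_rng(1)
    nc1, nc2 = 1.5, 3.0        # arbitrary non-centrality parameters
    x1 = rng.noncentral_chisquare(df=2, nonc=nc1, size=200_000)
    x2 = rng.noncentral_chisquare(df=2, nonc=nc2, size=200_000)
    z = np.minimum(x1, x2)     # the variate whose PDF/CDF the letter derives

    # For independent variates the CDF of the minimum factorises:
    # F_min(t) = 1 - (1 - F_1(t)) * (1 - F_2(t)).
    t = 2.0
    cdf_mc = (z <= t).mean()
    cdf_exact = 1.0 - ncx2.sf(t, 2, nc1) * ncx2.sf(t, 2, nc2)
    print(cdf_mc, cdf_exact)   # the two estimates should agree closely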

Journal ArticleDOI
TL;DR: A family of multivariate heavy-tailed distributions is proposed that allows variable marginal amounts of tailweight, can account for a variety of shapes, and has a simple tractable form with a closed-form probability density function whatever the dimension.
Abstract: We propose a family of multivariate heavy-tailed distributions that allow variable marginal amounts of tailweight. The originality comes from introducing multidimensional instead of univariate scale variables for the mixture of scaled Gaussian family of distributions. In contrast to most existing approaches, the derived distributions can account for a variety of shapes and have a simple tractable form with a closed-form probability density function whatever the dimension. We examine a number of properties of these distributions and illustrate them in the particular case of Pearson type VII and t tails. For these latter cases, we provide maximum likelihood estimation of the parameters and illustrate their modelling flexibility on simulated and real data clustering examples.

Journal ArticleDOI
TL;DR: In this paper, the performance of the multihop free-space optical (FSO) communication links using a heterodyne differential phase-shift keying modulation scheme operating over a turbulence induced fading channel is analyzed.
Abstract: This paper proposes and analyzes the performance of the multihop free-space optical (FSO) communication links using a heterodyne differential phase-shift keying modulation scheme operating over a turbulence induced fading channel. A novel statistical fading channel model for multihop FSO systems using channel-state-information-assisted and fixed-gain relays is developed incorporating the atmospheric turbulence, pointing errors, and path-loss effects. The closed-form expressions for the moment generating function, probability density function, and cumulative distribution function of the multihop FSO channel are derived using Meijer's G-function. They are then used to derive the fundamental limits of the outage probability and average symbol error rate. Results confirm the performance loss as a function of the number of hops. Effects of the turbulence strength varying from weak-to-moderate and moderate-to-strong turbulence, geometric loss, and pointing errors are studied. The pointing errors can be mitigated by widening the beam at the expense of the received power level, whereas narrowing the beam can reduce the geometric loss at the cost of increased misalignment effects.

Journal ArticleDOI
TL;DR: In this paper, the concept of the Wiener path integral in conjunction with a variational formulation is utilized to derive an approximate closed form solution for the system response non-stationary PDF.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the nonlinear stage of the modulation instability of the condensate and showed that the development of the MI leads to formation of "integrable turbulence" in integrable systems.
Abstract: In the framework of the focusing Nonlinear Schrödinger (NLS) equation we study numerically the nonlinear stage of the modulation instability (MI) of the condensate. As expected, the development of the MI leads to formation of "integrable turbulence" [V.E. Zakharov, Turbulence in integrable systems, Stud. in Appl. Math. 122, no. 3, 219-234, (2009)]. We study the time evolution of its major characteristics averaged across realizations of initial data - the condensate solution seeded by small random noise with fixed statistical properties. The measured quantities are: (1) wave-action spectrum and spatial correlation function, (2) the probability density function (PDF) of wave amplitudes and their momenta, and (3) kinetic and potential energies.
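A compact split-step Fourier sketch of this type of numerical experiment: a plane-wave condensate of the focusing NLS equation i q_t + q_xx/2 + |q|² q = 0 is seeded with small random noise and integrated in time, after which the PDF of the intensity |q|² can be histogrammed. Grid size, noise level and integration time are arbitrary choices, not the paper's settings.

    import numpy as np

    # Focusing NLS  i q_t + 0.5 q_xx + |q|^2 q = 0, periodic box, split-step Fourier.
    n, L, dt, t_end = 2048, 256.0, 0.005, 30.0          # illustrative parameters
    x = np.linspace(0.0, L, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)

    rng = np.random.default_rng(2)
    noise = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    q = 1.0 + noise                                     # condensate + small noise

    half_linear = np.exp(-0.5j * k**2 * (dt / 2.0))     # Strang splitting
    for _ in range(int(t_end / dt)):
        q = np.fft.ifft(half_linear * np.fft.fft(q))    # linear half-step
        q = q * np.exp(1j * np.abs(q)**2 * dt)          # nonlinear full step
        q = np.fft.ifft(half_linear * np.fft.fft(q))    # linear half-step

    # PDF of wave intensity |q|^2 after the MI has developed.
    pdf, edges = np.histogram(np.abs(q)**2, bins=60, density=True)
    print(np.abs(q).max())   # excursions well above the condensate level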

Journal ArticleDOI
TL;DR: The iterated conditional modes (ICM) framework for the optimization of the maximum a posteriori (MAP-MRF) criterion function is extended to include a nonlocal probability maximization step, which has the potential to preserve spatial details and to reduce speckle effects.
Abstract: In remote sensing change detection, Markov random field (MRF) has been used successfully to model the prior probability using class-label dependencies. MRF has played an important role in the detection of complex urban changes using optical images. However, the preservation of details in urban change analysis turns out to be a highly complex task if multitemporal SAR images with their speckle are to be used. Here, the ability of MRF to preserve geometric details and to combat speckle effect at the same time becomes questionable. Blob-region phenomena and the removal of fine structures are common consequences of applying the traditional MRF-based change detection algorithm. To overcome these limitations, the iterated conditional modes (ICM) framework for the optimization of the maximum a posteriori (MAP-MRF) criterion function is extended to include a nonlocal probability maximization step. This probability model, which characterizes the relationship between pixels’ class labels at a nonlocal scale, has the potential to preserve spatial details and to reduce speckle effects. Two multitemporal SAR datasets were used to assess the proposed algorithm. Experimental results using three density functions [i.e., the log normal (LN), generalized Gaussian (GG), and normal distributions (ND)] have demonstrated the efficiency of the proposed approach in terms of detail preservation and noise suppression. Compared with the traditional MRF algorithm, the proposed approach proved to be less sensitive to the value of the contextual parameter and the chosen density function. The proposed approach has also shown less sensitivity to the quality of the initial change map when compared with the ICM algorithm.

Journal ArticleDOI
TL;DR: In this article, an efficient probabilistic method for predicting rainfall-induced slope failures based on Monte Carlo simulation is presented, which can calculate the time-dependent failure probability of the slope during a rainfall infiltration process.

Journal ArticleDOI
TL;DR: In this article, the authors developed an envelope approach to time-dependent mechanism reliability defined in a period of time where a certain motion output is required, where the envelope function of the motion error is not explicitly related to time.
Abstract: This work develops an envelope approach to time-dependent mechanism reliability defined in a period of time where a certain motion output is required. Since the envelope function of the motion error is not explicitly related to time, the time-dependent problem can be converted into a time-independent problem. The envelope function is approximated by piecewise hyperplanes. To find the expansion points for the hyperplanes, the approach linearizes the motion error at the means of random dimension variables, and this approximation is accurate because the tolerances of the dimension variables are small. The expansion points are found with the maximum probability density at the failure threshold. The time-dependent mechanism reliability is then estimated by a multivariable normal distribution at the expansion points. As an example, analytical equations are derived for a four-bar function generating mechanism. The numerical example shows the significant accuracy improvement.

Journal ArticleDOI
TL;DR: This paper proposes two general frameworks for analytically computing the outage probability at any arbitrary location of an arbitrarily-shaped finite wireless network: a moment generating function-based framework which is based on the numerical inversion of the Laplace transform of a cumulative distribution and a reference link power gain- based framework.
Abstract: This paper analyzes the outage performance in finite wireless networks. Unlike most prior works, which either assumed a specific network shape or considered a special location of the reference receiver, we propose two general frameworks for analytically computing the outage probability at any arbitrary location of an arbitrarily-shaped finite wireless network: (i) a moment generating function-based framework which is based on the numerical inversion of the Laplace transform of a cumulative distribution and (ii) a reference link power gain-based framework which exploits the distribution of the fading power gain between the reference transmitter and receiver. The outage probability is spatially averaged over both the fading distribution and the possible locations of the interferers. The boundary effects are accurately accounted for using the probability distribution function of the distance of a random node from the reference receiver. For the case of the node locations modeled by a Binomial point process and Nakagami-m fading channel, we demonstrate the use of the proposed frameworks to evaluate the outage probability at any location inside either a disk or polygon region. The analysis illustrates the location-dependent performance in finite wireless networks and highlights the importance of accurately modeling the boundary effects.

Proceedings Article
08 Dec 2014
TL;DR: This work considers plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable 'estimate' of the class probability, and provides a general methodology to show consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of true positive rate and true negative rate.
Abstract: We study consistency properties of algorithms for non-decomposable performance measures that cannot be expressed as a sum of losses on individual data points, such as the F-measure used in text retrieval and several other performance measures used in class imbalanced settings. While there has been much work on designing algorithms for such performance measures, there is limited understanding of the theoretical properties of these algorithms. Recently, Ye et al. (2012) showed consistency results for two algorithms that optimize the F-measure, but their results apply only to an idealized setting, where precise knowledge of the underlying probability distribution (in the form of the 'true' posterior class probability) is available to a learning algorithm. In this work, we consider plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable 'estimate' of the class probability, and provide a general methodology to show consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of true positive rate (TPR) and true negative rate (TNR), and for which the Bayes optimal classifier is the class probability function thresholded suitably. We use this template to derive consistency results for plug-in algorithms for the F-measure and for the geometric mean of TPR and precision; to our knowledge, these are the first such results for these measures. In addition, for continuous distributions, we show consistency of plug-in algorithms for any performance measure that is a continuous and monotonically increasing function of TPR and TNR. Experimental results confirm our theoretical findings.
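A minimal sketch of the plug-in recipe for the F-measure: fit any class-probability estimator, then choose the decision threshold empirically on held-out data. Logistic regression and the synthetic imbalanced dataset are arbitrary stand-ins, not the estimators studied in the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    # Imbalanced synthetic data (illustrative).
    X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

    # Step 1: estimate the class probability eta(x) = P(y = 1 | x).
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    p_val = clf.predict_proba(X_val)[:, 1]

    # Step 2: pick the threshold that maximises the empirical F-measure.
    thresholds = np.linspace(0.01, 0.99, 99)
    f1s = [f1_score(y_val, (p_val >= t).astype(int)) for t in thresholds]
    t_star = thresholds[int(np.argmax(f1s))]
    print(t_star, max(f1s))   # typically well below 0.5 for imbalanced data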

Journal ArticleDOI
TL;DR: The authors compare regional and seasonal temperature and precipitation over land across two generations of global climate model ensembles, CMIP5 and CMIP3, through historical twentieth-century skill and multi-model agreement, and twenty-first-century projections.
Abstract: Regional and seasonal temperature and precipitation over land are compared across two generations of global climate model ensembles, specifically, CMIP5 and CMIP3, through historical twentieth century skills and multi-model agreement, and twenty first century projections. A suite of diagnostic and performance metrics, ranging from spatial bias or model-consensus maps and aggregate time series plots, to measures of equivalence between probability density functions and Taylor diagrams, are used for the intercomparisons. Pairwise and multi-model ensemble comparisons were performed for 11 models, which were selected based on data availability and resolutions. Results suggest little change in the central tendency or variability or uncertainty of historical skills or consensus across the two generations of models. However, there are regions and seasons, at different levels of aggregation, where significant changes, performance improvements, and even degradation in skills, are suggested. The insights may provide directions for further improvements in next generations of climate models, and in the meantime, help inform adaptation and policy.

Journal ArticleDOI
TL;DR: In this paper, the wind speed data has been statistically analyzed using Weibull distribution to find out wind energy conversion characteristics of Hatiya Island in Bangladesh, and the authors found that more than 58% of the total hours in a year have wind speed above 6.0 m/s.

Journal ArticleDOI
TL;DR: In this article, an improved method is proposed to study recent and near-future dark matter direct detection experiments with small numbers of observed events. But the method is not suitable for large numbers of events and it requires a large number of experiments.
Abstract: We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.

Journal ArticleDOI
TL;DR: The statistical properties of nonlinear random waves that are ruled by the one-dimensional defocusing and integrable nonlinear Schrödinger equation are examined, and the phenomenon of intermittency is revealed.
Abstract: We examine the statistical properties of nonlinear random waves that are ruled by the one-dimensional defocusing and integrable nonlinear Schrödinger equation. Using fast detection techniques in an optical fiber experiment, we observe that the probability density function of light fluctuations is characterized by tails that are lower than those predicted by a Gaussian distribution. Moreover, by applying a bandpass frequency optical filter, we reveal the phenomenon of intermittency; i.e., small scales are characterized by large heavy-tailed deviations from Gaussian statistics, while the large ones are almost Gaussian. These phenomena are very well described by numerical simulations of the one-dimensional nonlinear Schrödinger equation.

Journal ArticleDOI
TL;DR: In this article, the concept of the Wiener path integral (WPI) is used in conjunction with a variational formulation to derive an approximate closed-form solution for the system response PDF.
Abstract: A novel approximate analytical technique is developed for determining the nonstationary response probability density function (PDF) of randomly excited nonlinear multidegree-of-freedom (MDOF) systems. Specifically, the concept of the Wiener path integral (WPI) is used in conjunction with a variational formulation to derive an approximate closed-form solution for the system response PDF. Notably, determining the nonstationary response PDF is accomplished without the need to advance the solution in short time steps as it is required by existing alternative numerical path integral solution schemes, which rely on a discrete version of the Chapman-Kolmogorov (C-K) equation. In this manner, the analytical WPI-based technique developed by the authors is extended and generalized herein to account for hysteretic nonlinearities and MDOF systems. This enhancement of the technique affords circumventing approximations associated with the stochastic averaging treatment of the previously developed technique. Hop...

Journal ArticleDOI
TL;DR: In this article, a five-parameter extension of the Weibull distribution capable of modeling a bathtub-shaped hazard rate function is introduced and studied, and quantile and generating functions, mean deviations, Bonferroni and Lorenz curves and reliability are provided.
Abstract: A five-parameter extension of the Weibull distribution capable of modelling a bathtub-shaped hazard rate function is introduced and studied. The beauty and importance of the new distribution lies in its ability to model both monotone and non-monotone failure rates that are quite common in lifetime problems and reliability. The proposed distribution has a number of well-known lifetime distributions as special sub-models, such as the Weibull, extreme value, exponentiated Weibull, generalized Rayleigh and modified Weibull (MW) distributions, among others. We obtain quantile and generating functions, mean deviations, Bonferroni and Lorenz curves and reliability. We provide explicit expressions for the density function of the order statistics and their moments. For the first time, we define the log-Kumaraswamy MW regression model to analyse censored data. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is determined. Two applications illustrate t...

Journal ArticleDOI
TL;DR: In this paper, an approach of deriving the annual runoff distribution using copulas from an annual rainfall-runoff model is proposed to provide an alternative annual runoff frequency analysis method in case of changing climatic variables.
Abstract: An approach of deriving the annual runoff distribution using copulas from an annual rainfall-runoff model is proposed to provide an alternative annual runoff frequency analysis method in case of changing climatic variables. The annual rainfall-runoff model is established on the basis of the Budyko formula to estimate annual runoff, with annual precipitation and potential evapotranspiration as input variables. The model contains one single parameter k that guarantees that annual water balance is satisfied. In the derivation of the annual runoff distribution, annual precipitation, annual potential evapotranspiration, and parameter k are treated as three random variables, while the annual runoff distribution is obtained by integrating the joint probability density function of the three random variables over the domain constrained by the annual rainfall-runoff model using the canonical vine copula. This copula-based derivation approach is tested for 40 watersheds in two large basins in China. The estimated annual runoff distribution performs well in most watersheds. The performance is mainly related to the accuracy of the marginal distribution of precipitation. The copula-based derivation approach can also be used in ungauged watersheds where the distribution of k at the local site is estimated from the regional information of the k variable, and it also has acceptable performance in most watersheds, while poor performance is observed in a few watersheds with low accuracy in the Budyko formula.
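A rough Monte Carlo version of the derivation, simplified in two ways that depart from the paper: Fu's form of the Budyko curve (with parameter ω standing in for the paper's k) is used, and the three inputs are sampled independently instead of through a canonical vine copula; all marginal distributions are invented for the example.

    import numpy as np

    def fu_runoff(p, pet, omega):
        """Annual runoff from Fu's Budyko-type formula:
        E/P = 1 + PET/P - (1 + (PET/P)**omega)**(1/omega), and R = P - E."""
        phi = pet / p
        evap_ratio = 1.0 + phi - (1.0 + phi**omega) ** (1.0 / omega)
        return p * (1.0 - evap_ratio)

    rng = np.random.default_rng(4)
    n = 100_000
    p = rng.gamma(20.0, 50.0, n)          # annual precipitation, mm (illustrative)
    pet = rng.normal(1000.0, 100.0, n)    # annual potential ET, mm (illustrative)
    omega = rng.normal(2.5, 0.3, n)       # Budyko parameter (stand-in for k)

    runoff = fu_runoff(p, pet, np.clip(omega, 1.1, None))
    print(np.percentile(runoff, [10, 50, 90]))   # approximate runoff distribution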

Journal ArticleDOI
TL;DR: In this paper, the trust region sub-problem is studied in the context of finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N−1)-dimensional sphere.
Abstract: Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N−1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory, known as the “trust region subproblem” or “constrained least squares problem”. When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with the Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
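A sketch of the deterministic problem behind this analysis: minimising ½ xᵀAx + bᵀx on the sphere ‖x‖ = R via the secular equation x(λ) = −(A + λI)⁻¹ b, with λ located by bisection (generic case only); the Gaussian A and b below play the role of the spin-glass couplings and the random magnetic field, and the normalisations are illustrative.

    import numpy as np

    def trust_region_min(A, b, radius, tol=1e-10):
        """Global minimiser of 0.5*x'Ax + b'x subject to ||x|| = radius,
        via the secular equation x(lam) = -(A + lam*I)^{-1} b (generic case)."""
        evals, evecs = np.linalg.eigh(A)
        c = evecs.T @ b                           # b in the eigenbasis
        lam_lo = -evals[0] + 1e-12                # keep A + lam*I positive definite

        def norm_x(lam):
            return np.linalg.norm(c / (evals + lam))

        lam_hi = lam_lo + 1.0
        while norm_x(lam_hi) > radius:            # bracket the root
            lam_hi = lam_lo + 2.0 * (lam_hi - lam_lo)
        while lam_hi - lam_lo > tol:              # bisection on ||x(lam)|| = radius
            lam = 0.5 * (lam_lo + lam_hi)
            lam_lo, lam_hi = (lam, lam_hi) if norm_x(lam) > radius else (lam_lo, lam)
        lam = 0.5 * (lam_lo + lam_hi)
        return evecs @ (-c / (evals + lam))

    # Random instance: Gaussian couplings and field, sphere of radius sqrt(N).
    N = 200
    rng = np.random.default_rng(5)
    G = rng.standard_normal((N, N))
    A = (G + G.T) / np.sqrt(2.0 * N)              # GOE-like coupling matrix
    b = rng.standard_normal(N) / np.sqrt(N)       # random magnetic field
    x = trust_region_min(A, b, radius=np.sqrt(N))
    print(np.linalg.norm(x), 0.5 * x @ A @ x + b @ x)   # constraint check, energy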

Journal ArticleDOI
TL;DR: This paper introduces a flexible class of dependent nonparametric priors, investigates their properties and derives a suitable sampling scheme which allows their concrete implementation, and develops a Markov Chain Monte Carlo algorithm for drawing posterior inferences.
Abstract: The proposal and study of dependent prior processes has been a major research focus in the recent Bayesian nonparametric literature. In this paper, we introduce a flexible class of dependent nonparametric priors, investigate their properties and derive a suitable sampling scheme which allows their concrete implementation. The proposed class is obtained by normalizing dependent completely random measures, where the dependence arises by virtue of a suitable construction of the Poisson random measures underlying the completely random measures. We first provide general distributional results for the whole class of dependent completely random measures and then we specialize them to two specific priors, which represent the natural candidates for concrete implementation due to their analytic tractability: the bivariate Dirichlet and normalized σ-stable processes. Our analytical results, and in particular the partially exchangeable partition probability function, form also the basis for the determination of a Markov Chain Monte Carlo algorithm for drawing posterior inferences, which reduces to the well-known Blackwell-MacQueen Pólya urn scheme in the univariate case. Such an algorithm can be used for density estimation and for analyzing the clustering structure of the data and is illustrated through a real two-sample dataset example.
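In the univariate exchangeable case the posterior sampler reduces to the Blackwell-MacQueen Pólya urn; the sketch below draws from that urn for a Dirichlet process with concentration alpha and a standard normal base measure, both chosen arbitrarily here, and does not implement the dependent bivariate construction itself.

    import numpy as np

    def polya_urn(n, alpha=1.0, rng=None):
        """Draw n values from the Blackwell-MacQueen urn for a Dirichlet process
        with concentration alpha and a N(0, 1) base measure."""
        rng = np.random.default_rng() if rng is None else rng
        draws = []
        for i in range(n):
            # With prob alpha/(alpha+i) draw a new atom from the base measure,
            # otherwise pick one of the previous draws uniformly at random.
            if rng.uniform() < alpha / (alpha + i):
                draws.append(rng.standard_normal())
            else:
                draws.append(draws[rng.integers(i)])
        return np.array(draws)

    sample = polya_urn(1000, alpha=2.0)
    print(len(np.unique(sample)))   # distinct atoms grow roughly like alpha*log(n)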