
Showing papers on "Uncertainty quantification" published in 2017


Posted Content
TL;DR: A Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty is presented; the explicit uncertainty formulation makes the loss more robust to noisy data and gives new state-of-the-art results on segmentation and depth regression benchmarks.
Abstract: There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.

2,616 citations
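
The aleatoric half of this framework reduces to a simple loss. Below is a minimal numpy sketch of a learned-attenuation regression loss along the lines described above, assuming the network outputs both a prediction and a log-variance per target; the array names and toy data are illustrative, not the authors' code.

```python
import numpy as np

def learned_attenuation_loss(y_true, y_pred, log_var):
    """Heteroscedastic regression loss: residuals are down-weighted where the
    network predicts high aleatoric variance, and a log-variance penalty
    discourages predicting large variance everywhere."""
    precision = np.exp(-log_var)
    return np.mean(0.5 * precision * (y_true - y_pred) ** 2 + 0.5 * log_var)

# Toy usage with arrays standing in for per-pixel depth predictions.
rng = np.random.default_rng(0)
y_true = rng.normal(size=100)
y_pred = y_true + rng.normal(scale=0.1, size=100)
log_var = np.full(100, np.log(0.1 ** 2))
print(learned_attenuation_loss(y_true, y_pred, log_var))
```

In the paper the epistemic part is obtained separately, e.g. by averaging several stochastic (dropout) forward passes; that part is omitted here.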


Proceedings Article
04 Dec 2017
TL;DR: In this paper, a Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty was proposed for per-pixel semantic segmentation and depth regression tasks; the explicit uncertainty formulation leads to new loss functions that can be interpreted as learned attenuation.
Abstract: There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.

1,263 citations


Journal ArticleDOI
TL;DR: A simple data assimilation framework is reviewed for calibrating mathematical models based on ordinary differential equations, using time-series data that describe the temporal progression of case counts related to population growth or infectious disease transmission dynamics.

312 citations
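
As a concrete illustration of the kind of calibration this review discusses, here is a hedged sketch that fits a logistic growth ODE to synthetic case counts with SciPy and uses a residual bootstrap for parameter uncertainty; the model, data and bootstrap scheme are stand-ins, not the review's specific examples.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical example: calibrate a logistic growth model dC/dt = r*C*(1 - C/K)
# to noisy cumulative case counts, then bootstrap residuals for uncertainty.
def cases(theta, t):
    r, K = theta
    sol = solve_ivp(lambda t, C: r * C * (1 - C / K), (t[0], t[-1]), [5.0], t_eval=t)
    return sol.y[0]

t = np.arange(0, 30.0)
rng = np.random.default_rng(1)
data = cases([0.4, 1000.0], t) + rng.normal(scale=20.0, size=t.size)

bounds = ([1e-3, 10.0], [2.0, 1e5])
fit = least_squares(lambda th: cases(th, t) - data, x0=[0.2, 500.0], bounds=bounds)

# Residual bootstrap: refit to resampled residuals to quantify parameter uncertainty.
resid = cases(fit.x, t) - data
boot = []
for _ in range(50):
    fake = cases(fit.x, t) + rng.choice(resid, size=t.size, replace=True)
    boot.append(least_squares(lambda th: cases(th, t) - fake, x0=fit.x, bounds=bounds).x)
print(fit.x, np.percentile(boot, [2.5, 97.5], axis=0))
```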


Journal ArticleDOI
TL;DR: A probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes is put forth that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends.
Abstract: Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.

259 citations
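
A rough sketch of the nonlinear autoregressive idea using scikit-learn Gaussian processes: the high-fidelity surrogate is trained on inputs augmented with the low-fidelity posterior mean, so it can learn a nonlinear cross-correlation. The test functions and sample sizes below are illustrative assumptions, and the paper's full framework handles noise and space-dependent correlations more carefully.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical low/high-fidelity models standing in for expensive simulators.
def f_low(x):  return np.sin(8 * np.pi * x)
def f_high(x): return (x - np.sqrt(2)) * f_low(x) ** 2

x_lo = np.linspace(0, 1, 50)[:, None]                       # many cheap runs
x_hi = np.random.default_rng(0).uniform(0, 1, 14)[:, None]  # few expensive runs

gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF()).fit(x_lo, f_low(x_lo).ravel())

# Nonlinear autoregression: the high-fidelity GP sees [x, low-fidelity prediction at x].
aug_hi = np.hstack([x_hi, gp_lo.predict(x_hi).reshape(-1, 1)])
gp_hi = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 1.0])).fit(aug_hi, f_high(x_hi).ravel())

x_test = np.linspace(0, 1, 200)[:, None]
aug_test = np.hstack([x_test, gp_lo.predict(x_test).reshape(-1, 1)])
mean, std = gp_hi.predict(aug_test, return_std=True)
print(mean[:3], std[:3])
```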


Journal ArticleDOI
TL;DR: This review article will address the two principal components of the cardiovascular system: arterial circulation and heart function, and systematically describe all aspects of the problem, ranging from data imaging acquisition to the development of reduced-order models that are of paramount importance when solving problems with high complexity, which would otherwise be out of reach.
Abstract: Mathematical and numerical modelling of the cardiovascular system is a research topic that has attracted remarkable interest from the mathematical community because of its intrinsic mathematical difficulty and the increasing impact of cardiovascular diseases worldwide. In this review article we will address the two principal components of the cardiovascular system: arterial circulation and heart function. We will systematically describe all aspects of the problem, ranging from data imaging acquisition, stating the basic physical principles, analysing the associated mathematical models that comprise PDE and ODE systems, proposing sound and efficient numerical methods for their approximation, and simulating both benchmark problems and clinically inspired problems. Mathematical modelling itself imposes tremendous challenges, due to the amazing complexity of the cardiocirculatory system, the multiscale nature of the physiological processes involved, and the need to devise computational methods that are stable, reliable and efficient. Critical issues involve filtering the data, identifying the parameters of mathematical models, devising optimal treatments and accounting for uncertainties. For this reason, we will devote the last part of the paper to control and inverse problems, including parameter estimation, uncertainty quantification and the development of reduced-order models that are of paramount importance when solving problems with high complexity, which would otherwise be out of reach.

176 citations


Journal ArticleDOI
TL;DR: Differences between various multi-fidelity surrogate (MFS) frameworks are investigated with the aid of examples including algebraic functions and a borehole example; the MFS frameworks are found to be more useful for saving computational time than for improving accuracy.
Abstract: Different multi-fidelity surrogate (MFS) frameworks have been used for optimization or uncertainty quantification. This paper investigates differences between various MFS frameworks with the aid of examples including algebraic functions and a borehole example. These MFS include three Bayesian frameworks using 1) a model discrepancy function, 2) low-fidelity model calibration and 3) a comprehensive approach combining both. Three counterparts in simple frameworks are also included, which have the same functional form but can be built with ready-made surrogates. The sensitivity of the frameworks to the choice of design of experiments (DOE) is investigated by repeating calculations with 100 different DOEs. Computational cost savings and accuracy improvement over a single-fidelity surrogate model are investigated as a function of the ratio of the sampling costs between low- and high-fidelity simulations. For the examples considered, MFS frameworks were found to be more useful for saving computational time than for improving accuracy. For the Hartmann 6 function example, the maximum cost saving for the same accuracy was 86%, while the maximum accuracy improvement for the same cost was 51%. It was also found that the DOE can substantially change the relative standing of different frameworks. The cross-validation error appears to be a reasonable candidate for identifying poor MFS frameworks for a specific problem, but it does not perform well compared to choosing single-fidelity surrogates.

160 citations
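
One of the "simple frameworks" mentioned above, an additive discrepancy-function surrogate built from ready-made regressors, can be sketched as follows; the one-dimensional test pair and the use of scikit-learn Gaussian processes are illustrative assumptions, not the paper's borehole setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Illustrative 1-D high/low-fidelity pair (not the paper's borehole example).
def hi(x): return (6 * x - 2) ** 2 * np.sin(12 * x - 4)
def lo(x): return 0.5 * hi(x) + 10 * (x - 0.5) - 5

x_lo = np.linspace(0, 1, 21)[:, None]          # many cheap samples
x_hi = np.array([[0.0], [0.4], [0.6], [1.0]])  # a few expensive samples

surr_lo = GaussianProcessRegressor(ConstantKernel() * RBF()).fit(x_lo, lo(x_lo).ravel())

# Discrepancy-function framework: model delta(x) = hi(x) - surr_lo(x) with a second surrogate.
delta = hi(x_hi).ravel() - surr_lo.predict(x_hi)
surr_delta = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=1e-8).fit(x_hi, delta)

x = np.linspace(0, 1, 5)[:, None]
mfs_pred = surr_lo.predict(x) + surr_delta.predict(x)
print(np.c_[hi(x).ravel(), mfs_pred])          # true high-fidelity value vs. MFS prediction
```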


Journal ArticleDOI
TL;DR: In this paper, an integrated numerical framework was presented to co-optimize enhanced oil recovery (EOR) and CO2 storage performance under uncertainty in the Farnsworth Unit (FWU) oil field in Ochiltree County, Texas.

159 citations


Journal ArticleDOI
TL;DR: This work proposes a new method to provide local metamodel error estimates based on bootstrap resampling and sparse PCE, and demonstrates the effectiveness of this approach on a well-known analytical benchmark representing a series system, on a truss structure and on a complex realistic frame structure problem.
Abstract: Polynomial chaos expansions (PCE) have seen widespread use in the context of uncertainty quantification. However, their application to structural reliability problems has been hindered by the limited performance of PCE in the tails of the model response and due to the lack of local metamodel error estimates. We propose a new method to provide local metamodel error estimates based on bootstrap resampling and sparse PCE. An initial experimental design is iteratively updated based on the current estimation of the limit-state surface in an active learning algorithm. The greedy algorithm uses the bootstrap-based local error estimates for the polynomial chaos predictor to identify the best candidate set of points to enrich the experimental design. We demonstrate the effectiveness of this approach on a well-known analytical benchmark representing a series system, on a truss structure and on a complex realistic frame structure problem.

134 citations
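
A stripped-down sketch of the bootstrap idea: resampling the experimental design yields an ensemble of PCE metamodels whose spread serves as a local error estimate, which an active-learning criterion can combine with closeness to the limit state. Ordinary least-squares Legendre fits stand in for the paper's sparse (LARS-based) PCE, and the enrichment score is a simplified stand-in for the authors' criterion.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

def model(x):                 # illustrative limit-state function, g(x) <= 0 is failure
    return 0.6 - x ** 2

# Initial experimental design on [-1, 1] and a degree-4 Legendre PCE fit.
X = rng.uniform(-1, 1, 12)
Y = model(X)
deg, B = 4, 200

# Bootstrap resampling of the design gives B alternative PCE metamodels;
# their spread at a point is a local metamodel error estimate.
coefs = []
for _ in range(B):
    idx = rng.integers(0, X.size, X.size)
    coefs.append(legendre.legfit(X[idx], Y[idx], deg))

cand = np.linspace(-1, 1, 101)
preds = np.array([legendre.legval(cand, c) for c in coefs])    # (B, n_candidates)
mean, spread = preds.mean(axis=0), preds.std(axis=0)

# Enrichment: favour candidates near the limit state g = 0 with large bootstrap spread.
score = spread / (np.abs(mean) + 1e-9)
print("next design point:", cand[np.argmax(score)])
```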


Journal ArticleDOI
TL;DR: This paper proposes the concept of the probabilistic dual hesitant fuzzy set (PDHFS), defines the basic operational laws and some aggregation operators of the PDHFS for information fusion, and proposes a visualization method based on the entropy of the PDHFS so as to analyze the aggregated information and improve the final evaluation results.
Abstract: Highlights: Propose the concept of the probabilistic dual hesitant fuzzy set (PDHFS). Define the basic operational laws and some aggregation operators of the PDHFS for information fusion. Propose the concept of entropy for the PDHFS and develop a visual analysis method for the aggregated results. Apply the proposed theory and methods to the Arctic geopolitical risk evaluation.

The concurrence of randomness and imprecision widely exists in real-world problems. To describe the aleatory and epistemic uncertainty in a single framework and take more information into account, in this paper, we propose the concept of the probabilistic dual hesitant fuzzy set (PDHFS) and define the basic operational laws of PDHFSs. For the purpose of applications, we also develop the basic aggregation operator for PDHFSs and give the general procedures for information fusion. Next, we propose a visualization method based on the entropy of PDHFSs so as to analyze the aggregated information and improve the final evaluation results. The proposed method is then applied to risk evaluations. A case study of the Arctic geopolitical risk evaluation is presented to illustrate its validity and effectiveness. Finally, we discuss the advantages and the limitations of the PDHFS in detail.

131 citations
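
To make the data structure concrete, here is a small illustrative sketch of a probabilistic dual hesitant fuzzy element as membership and non-membership degrees paired with probabilities, ranked with a simple expected-value style score; the score function is a hypothetical illustration, not the operational laws defined in the paper.

```python
# Illustrative probabilistic dual hesitant fuzzy element (PDHFE):
# possible membership degrees with probabilities, and possible non-membership
# degrees with probabilities. The score is a simple expected-value ranking
# index for illustration only; it is not taken verbatim from the paper.
def pdhfe_score(memberships, nonmemberships):
    s_m = sum(g * p for g, p in memberships)      # expected membership
    s_n = sum(e * q for e, q in nonmemberships)   # expected non-membership
    return s_m - s_n

a = pdhfe_score([(0.6, 0.7), (0.8, 0.3)], [(0.1, 1.0)])
b = pdhfe_score([(0.5, 1.0)], [(0.2, 0.5), (0.3, 0.5)])
print(a, b, "-> prefer A" if a > b else "-> prefer B")
```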


Journal ArticleDOI
TL;DR: In this article, the authors present a concise state-of-the-art review along with an exhaustive comparative investigation on surrogate models for critical comparative assessment of uncertainty in natural frequencies of composite plates on the basis of computational efficiency and accuracy.

123 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian calibration approach is proposed in which the prior distribution on the model bias is orthogonal to the gradient of the computer model, mitigating the issue that the posterior of the calibration parameter is otherwise suboptimally broad.
Abstract: Bayesian calibration is used to study computer models in the presence of both a calibration parameter and model bias. The parameter in the predominant methodology is left undefined. This results in an issue where the posterior of the parameter is suboptimally broad. There have been no generally accepted alternatives to date. This article proposes using Bayesian calibration where the prior distribution on the bias is orthogonal to the gradient of the computer model. Problems associated with Bayesian calibration are shown to be mitigated through analytic results in addition to examples. Supplementary materials for this article are available online.

Posted Content
TL;DR: In this article, a re-parametrisation of the alpha-divergence objective is proposed, which can be easily implemented with existing models by simply changing the loss of the model.
Abstract: To obtain uncertainty estimates with real-world Bayesian deep learning models, practical inference approximations are needed. Dropout variational inference (VI), for example, has been used for machine vision and medical applications, but VI can severely underestimate model uncertainty. Alpha-divergences are alternative divergences to VI's KL objective, which are able to avoid VI's uncertainty underestimation. But these are hard to use in practice: existing techniques can only use Gaussian approximating distributions, and require existing models to be changed radically, and thus are of limited use for practitioners. We propose a re-parametrisation of the alpha-divergence objectives, deriving a simple inference technique which, together with dropout, can be easily implemented with existing models by simply changing the loss of the model. We demonstrate improved uncertainty estimates and accuracy compared to VI in dropout networks. We study our model's epistemic uncertainty far away from the data using adversarial images, showing that these can be distinguished from non-adversarial images by examining our model's uncertainty.

Proceedings Article
06 Aug 2017
TL;DR: A reparametrisation of the alpha-divergence objectives is proposed, deriving a simple inference technique which, together with dropout, can be easily implemented with existing models by simply changing the loss of the model.
Abstract: To obtain uncertainty estimates with real-world Bayesian deep learning models, practical inference approximations are needed. Dropout variational inference (VI), for example, has been used for machine vision and medical applications, but VI can severely underestimate model uncertainty. Alpha-divergences are alternative divergences to VI's KL objective, which are able to avoid VI's uncertainty underestimation. But these are hard to use in practice: existing techniques can only use Gaussian approximating distributions, and require existing models to be changed radically, and thus are of limited use for practitioners. We propose a reparametrisation of the alpha-divergence objectives, deriving a simple inference technique which, together with dropout, can be easily implemented with existing models by simply changing the loss of the model. We demonstrate improved uncertainty estimates and accuracy compared to VI in dropout networks. We study our model's epistemic uncertainty far away from the data using adversarial images, showing that these can be distinguished from non-adversarial images by examining our model's uncertainty.
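
A minimal numpy sketch of how the re-parametrised objective can be evaluated from per-datapoint losses under K dropout masks, as suggested by the abstract; the regularisation term and network code are omitted, and the function name and toy losses are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def dropout_alpha_loss(per_sample_losses, alpha=0.5):
    """Re-parametrised alpha-divergence objective (sketch): per_sample_losses has
    shape (K, N) -- the negative log-likelihood of each of N data points under
    K stochastic (dropout) forward passes."""
    K, N = per_sample_losses.shape
    # -(1/alpha) * sum_n log (1/K) sum_k exp(-alpha * loss_nk)
    return -np.sum(logsumexp(-alpha * per_sample_losses, axis=0) - np.log(K)) / alpha

rng = np.random.default_rng(0)
losses = rng.gamma(2.0, 0.5, size=(10, 32))     # 10 dropout passes, 32 data points
print(dropout_alpha_loss(losses, alpha=0.5))
```

In the alpha -> 0 limit this reduces to the usual averaged loss, while alpha = 1 gives an expectation-propagation-like, mass-covering objective.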

Journal ArticleDOI
TL;DR: In this paper, the basis-adaptive sparse polynomial chaos (BASPC) expansion is introduced to perform probabilistic power flow analysis in power systems, taking advantage of three state-of-the-art uncertainty quantification methodologies: the hyperbolic scheme to truncate the infinite polynomial chaos (PC) series, the least angle regression (LARS) technique to select the optimal degree of each univariate PC series, and the Copula to deal with nonlinear correlations among random input variables.
Abstract: This paper introduces the basis-adaptive sparse polynomial chaos (BASPC) expansion to perform the probabilistic power flow (PPF) analysis in power systems. The proposed method takes advantage of three state-of-the-art uncertainty quantification methodologies reasonably: the hyperbolic scheme to truncate the infinite polynomial chaos (PC) series; the least angle regression (LARS) technique to select the optimal degree of each univariate PC series; and the Copula to deal with nonlinear correlations among random input variables. Consequently, the proposed method brings appealing features to PPF, including the ability to handle the large-scale uncertainty sources; to tackle the nonlinear correlation among the random inputs; to analytically calculate representative statistics of the desired outputs; and to dramatically alleviate the computational burden as of traditional methods. The accuracy and efficiency of the proposed method are verified through either quantitative indicators or graphical results of PPF on both the IEEE European Low Voltage Test Feeder and the IEEE 123 Node Test Feeder, in the presence of more than 100 correlated uncertain input variables.
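
A hedged sketch of two of the three ingredients, hyperbolic truncation of the PC basis and LARS-type sparse selection, on a toy two-input response; sklearn's LassoLarsCV stands in for the paper's LARS scheme, the Copula step is omitted, and all function names and data are illustrative.

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval
from sklearn.linear_model import LassoLarsCV

rng = np.random.default_rng(0)

def hyperbolic_multi_indices(dim, p, q=0.75):
    # Keep multi-indices a with (sum_i a_i^q)^(1/q) <= p  (hyperbolic truncation).
    idx = [a for a in product(range(p + 1), repeat=dim)
           if np.sum(np.array(a, float) ** q) ** (1.0 / q) <= p]
    return sorted(idx, key=sum)

def design_matrix(X, indices):
    # Tensor-product Legendre basis evaluated at inputs scaled to [-1, 1].
    cols = []
    for a in indices:
        col = np.ones(X.shape[0])
        for d, deg in enumerate(a):
            c = np.zeros(deg + 1); c[deg] = 1.0
            col = col * legval(X[:, d], c)
        cols.append(col)
    return np.column_stack(cols)

# Illustrative 2-input "power-flow-like" response (the real paper uses bus injections).
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.01, 200)

A = design_matrix(X, hyperbolic_multi_indices(dim=2, p=5))
# LARS-type selection of the few active polynomial terms (sparse PCE coefficients).
model = LassoLarsCV(cv=5).fit(A[:, 1:], y)   # drop the constant column; it becomes the intercept
print("non-zero terms:", np.count_nonzero(model.coef_), "mean estimate:", model.intercept_)
```

With an orthonormal basis (e.g. scaling the degree-k Legendre term by sqrt(2k+1) for uniform inputs), the PCE mean and variance of the output follow analytically from the selected coefficients.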

Journal ArticleDOI
TL;DR: In this paper, a critical comparative assessment of Kriging model variants for surrogate-based uncertainty propagation considering stochastic natural frequencies of composite doubly curved shells is presented, where the effect of noise in uncertainty propagation is addressed by using stochastic Kriging.
Abstract: This paper presents a critical comparative assessment of Kriging model variants for surrogate based uncertainty propagation considering stochastic natural frequencies of composite doubly curved shells. The five Kriging model variants studied here are: Ordinary Kriging, Universal Kriging based on pseudo-likelihood estimator, Blind Kriging, Co-Kriging and Universal Kriging based on marginal likelihood estimator. First three stochastic natural frequencies of the composite shell are analysed by using a finite element model that includes the effects of transverse shear deformation based on Mindlin’s theory in conjunction with a layer-wise random variable approach. The comparative assessment is carried out to address the accuracy and computational efficiency of five Kriging model variants. Comparative performance of different covariance functions is also studied. Subsequently the effect of noise in uncertainty propagation is addressed by using the Stochastic Kriging. Representative results are presented for both individual and combined stochasticity in layer-wise input parameters to address performance of various Kriging variants for low dimensional and relatively higher dimensional input parameter spaces. The error estimation and convergence studies are conducted with respect to original Monte Carlo Simulation to justify merit of the present investigation. The study reveals that Universal Kriging coupled with marginal likelihood estimate yields the most accurate results, followed by Co-Kriging and Blind Kriging. As far as computational efficiency of the Kriging models is concerned, it is observed that for high-dimensional problems, CPU time required for building the Co-Kriging model is significantly less as compared to other Kriging variants.

Journal ArticleDOI
TL;DR: An improved distance-based total uncertainty measure is proposed to overcome the limitations of Yang and Han's uncertainty measure; it not only retains the desired properties of the original measure, but also possesses higher sensitivity to changes in the evidence.
Abstract: Uncertainty quantification of mass functions is a crucial and unsolved issue in belief function theory. Previous studies have mostly considered this problem from the perspective of viewing the belief function theory as an extension of probability theory. Recently, Yang and Han have developed a new distance-based total uncertainty measure directly and totally based on the framework of belief function theory so that there is no switch between the frameworks of belief function theory and probability theory in that measure. However, we have found some obvious deficiencies in Yang and Han's uncertainty measure which could lead to counter-intuitive results in some cases. In this paper, an improved distance-based total uncertainty measure has been proposed to overcome the limitations of Yang and Han's uncertainty measure. The proposed measure not only retains the desired properties of the original measure, but also possesses higher sensitivity to changes in the evidence. A number of examples and applications have verified the effectiveness and rationality of the proposed uncertainty measure.

Journal ArticleDOI
TL;DR: The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem, and the most important stochastic features are captured at a reduced computational cost compared to the LAR method.

Journal ArticleDOI
TL;DR: In this article, the number of layers is also treated as a variable in the reversible jump Markov chain Monte Carlo (RJMCMC) approach, which is a tool for model exploration and uncertainty quantification.
Abstract: Prestack or angle stack gathers are inverted to estimate pseudologs at every surface location for building reservoir models. Recently, several methods have been proposed to increase the resolution of the inverted models. All of these methods, however, require that the total number of model parameters be fixed a priori. We have investigated an alternate approach in which we allow the data themselves to choose model parameterization. In other words, in addition to the layer properties, the number of layers is also treated as a variable in our formulation. Such transdimensional inverse problems are generally solved by using the reversible jump Markov chain Monte Carlo (RJMCMC) approach, which is a tool for model exploration and uncertainty quantification. This method, however, has very low acceptance. We have developed a two-step method by combining RJMCMC with a fixed-dimensional MCMC called Hamiltonian Monte Carlo, which makes use of gradient information to take large steps. Acceptance probability ...
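
The fixed-dimensional half of the two-step scheme can be illustrated with a generic Hamiltonian Monte Carlo update; the leapfrog integrator and Gaussian toy target below are a standard sketch, not the authors' seismic inversion code, and the reversible-jump birth/death moves that change the number of layers are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(theta, logp, grad_logp, eps=0.1, n_leapfrog=20):
    """One Hamiltonian Monte Carlo update for a fixed-dimensional target.
    In a two-step scheme like the paper's, updates of this kind refine the
    layer properties between reversible-jump moves that add or remove layers."""
    p = rng.normal(size=theta.size)
    theta_new = theta.copy()
    p_new = p + 0.5 * eps * grad_logp(theta_new)          # half momentum step
    for _ in range(n_leapfrog):
        theta_new = theta_new + eps * p_new               # full position step
        p_new = p_new + eps * grad_logp(theta_new)        # full momentum step
    p_new = p_new - 0.5 * eps * grad_logp(theta_new)      # undo the extra half step
    log_accept = (logp(theta_new) - 0.5 * p_new @ p_new) - (logp(theta) - 0.5 * p @ p)
    return theta_new if np.log(rng.uniform()) < log_accept else theta

# Toy target: a standard bivariate Gaussian.
logp = lambda x: -0.5 * x @ x
grad_logp = lambda x: -x
x, samples = np.zeros(2), []
for _ in range(2000):
    x = hmc_step(x, logp, grad_logp)
    samples.append(x)
print(np.mean(samples, axis=0), np.std(samples, axis=0))
```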

Journal ArticleDOI
TL;DR: This review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs and considers hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation.

Journal ArticleDOI
TL;DR: In this article, a framework for structural health monitoring (SHM) and damage identification of civil structures is presented, which integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of interest.

Journal ArticleDOI
TL;DR: Results show that the Bayesian flood forecasting approach is an effective and advanced way for flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts.

Journal ArticleDOI
TL;DR: In this article, a stochastic approach to study the natural frequencies of thin-walled laminated composite beams with spatially varying matrix cracking damage in a multi-scale framework is introduced.

Journal ArticleDOI
TL;DR: The method builds upon recent work on recursive Bayesian techniques and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones, and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization.

Proceedings ArticleDOI
19 Mar 2017
TL;DR: The basics of polynomial chaos expansions and low-rank tensor approximations are given together with hints on how to derive the statistics of interest, namely moments, sensitivity indices or probabilities of failure.
Abstract: Uncertainty quantification has become a hot topic in computational sciences in the last decade. Indeed, computer models (a.k.a. simulators) are becoming more and more complex and demanding, yet the knowledge of the input parameters to feed into the model is usually limited. Based on the available data and possibly expert knowledge, parameters are represented by random variables. Of crucial interest is the propagation of the uncertainties through the simulator so as to estimate statistics of the quantities of interest. Monte Carlo simulation, a popular technique based on random number simulation, is unaffordable in practice when each simulator run takes minutes to hours. In this contribution we shortly review recent techniques to bypass Monte Carlo simulation, namely surrogate models. The basics of polynomial chaos expansions and low-rank tensor approximations are given together with hints on how to derive the statistics of interest, namely moments, sensitivity indices or probabilities of failure.
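
One of the "hints" alluded to above is that, for an orthonormal PC basis, moments and Sobol sensitivity indices are read directly off the coefficients; below is a small sketch with made-up coefficients and multi-indices.

```python
import numpy as np

# Post-processing a PCE with an orthonormal basis: moments and Sobol indices
# follow directly from the coefficients, with no extra simulator runs.
multi_indices = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1)]      # basis terms (toy example)
coeffs        = np.array([3.0, 0.8, -0.5, 0.2, 0.3])          # fitted PCE coefficients

mean = coeffs[0]                       # coefficient of the constant term
variance = np.sum(coeffs[1:] ** 2)     # sum of squared non-constant coefficients

def first_order_sobol(i):
    # Terms that involve variable i only (no interactions with other variables).
    mask = [all((a > 0) == (d == i) for d, a in enumerate(alpha)) and sum(alpha) > 0
            for alpha in multi_indices]
    return np.sum(coeffs[mask] ** 2) / variance

print("mean:", mean, "variance:", variance)
print("S_1:", first_order_sobol(0), "S_2:", first_order_sobol(1))
```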


Journal ArticleDOI
TL;DR: An automatic Bayesian approach to parameter estimation based on adaptive Markov chain Monte Carlo sampling that assimilates non-invasive quantities commonly acquired in routine clinical care, quantifies the uncertainty in the estimated parameters, and computes the confidence in local predicted hemodynamic indicators.

Journal ArticleDOI
TL;DR: This paper presents a high-dimensional uncertainty quantification algorithm from a big data perspective that can efficiently simulate some IC, MEMS, and photonic problems with over 50 independent random parameters, whereas the traditional algorithm can only deal with a small number of random parameters.
Abstract: Fabrication process variations are a major source of yield degradation in the nanoscale design of integrated circuits (ICs), microelectromechanical systems (MEMSs), and photonic circuits. Stochastic spectral methods are a promising technique to quantify the uncertainties caused by process variations. Despite their superior efficiency over Monte Carlo for many design cases, stochastic spectral methods suffer from the curse of dimensionality, i.e., their computational cost grows very fast as the number of random parameters increases. In order to solve this challenging problem, this paper presents a high-dimensional uncertainty quantification algorithm from a big data perspective. Specifically, we show that the huge number of (e.g., 1.5 × 10^27) simulation samples in standard stochastic collocation can be reduced to a very small one (e.g., 500) by exploiting some hidden structures of a high-dimensional data array. This idea is formulated as a tensor recovery problem with sparse and low-rank constraints, and it is solved with an alternating minimization approach. The numerical results show that our approach can efficiently simulate some IC, MEMS, and photonic problems with over 50 independent random parameters, whereas the traditional algorithm can only deal with a small number of random parameters.

Journal ArticleDOI
TL;DR: In this article, a new method based on the fuzzy unscented transform and radial basis function neural networks (RBFNN) was proposed for possibilistic probabilistic power flow (possibilistic-PPF) in microgrids including uncertain loads, correlated wind and solar distributed energy resources, and plug-in hybrid electric vehicles.
Abstract: The probabilistic power flow (PPF) of active distribution networks and microgrids based on the conventional power flow algorithms is almost impossible or at least cumbersome. Monte Carlo simulation is always a reliable solution; however, its computation time is relatively high, which makes it unattractive for large interconnected power systems. This study presents a new method based on the fuzzy unscented transform and radial basis function neural networks (RBFNN) for possibilistic-PPF in microgrids including uncertain loads, correlated wind and solar distributed energy resources, and plug-in hybrid electric vehicles. When sufficient historical data of the system variables is not available, a probability density function might not be definable, and the variables must instead be represented possibilistically. When some of the system's uncertain variables are probabilistic and some are possibilistic, neither the conventional pure probabilistic nor pure possibilistic methods can be implemented; hence, a combined solution methodology is needed. The proposed method exploits the ability of the RBFNN and the unscented transform in non-linear mapping with an acceptable level of accuracy, robustness and reliability. Simulation results for the proposed PPF algorithm and its comparison with the reported methods for different test power systems reveal its efficiency, accuracy, robustness and authenticity.
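
The crisp core of the method, propagating a mean and covariance through a nonlinear mapping with sigma points, can be sketched as below; the fuzzy arithmetic and the RBFNN surrogate of the paper are omitted, and the toy mapping and parameter values are assumptions.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Basic symmetric (2n+1)-sigma-point unscented transform: propagate a
    mean/covariance through a nonlinear mapping f without linearisation."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)          # columns are sigma directions
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # (2n+1, n) sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Y = np.array([f(s) for s in sigma])
    y_mean = w @ Y
    y_cov = (w[:, None] * (Y - y_mean)).T @ (Y - y_mean)
    return y_mean, y_cov

# Toy "power-flow-like" nonlinear mapping of two uncertain injections.
f = lambda x: np.array([x[0] ** 2 + np.sin(x[1]), x[0] * x[1]])
m, C = unscented_transform(np.array([1.0, 0.5]), np.diag([0.04, 0.09]), f)
print(m, C)
```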

Journal ArticleDOI
Roland Schöbi, Bruno Sudret
TL;DR: In this paper, two types of probability-boxes (p-boxes) are distinguished, free and parametric, which account for both aleatory and epistemic uncertainty, and two-level approaches are proposed that use Kriging meta-models with adaptive experimental designs at the different levels of the structural reliability analysis.
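
A brute-force sketch of the two-level idea for a parametric p-box: an outer loop over the epistemic parameter interval and an inner Monte Carlo loop over the aleatory variables bound the failure probability. The limit state and distributions are illustrative, and the paper's adaptive Kriging meta-models, which make this nested loop affordable, are not included.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(r, s):                       # illustrative limit state: failure when g < 0
    return r - s

# Parametric p-box: the load S ~ Normal(mu_s, 0.5) but mu_s is only known to lie
# in an interval (epistemic uncertainty); the resistance R is purely aleatory.
def failure_probability(mu_s, n=200_000):
    r = rng.normal(5.0, 0.3, n)    # inner, aleatory Monte Carlo loop
    s = rng.normal(mu_s, 0.5, n)
    return np.mean(g(r, s) < 0)

pf = [failure_probability(mu) for mu in np.linspace(3.0, 3.8, 9)]   # outer, epistemic loop
print(f"failure probability bounds: [{min(pf):.2e}, {max(pf):.2e}]")
```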

Journal ArticleDOI
TL;DR: In this article, a cost and error analysis of a multilevel estimator based on randomly shifted Quasi-Monte Carlo (QMC) lattice rules for lognormal diffusion problems is presented.
Abstract: In this paper we present a rigorous cost and error analysis of a multilevel estimator based on randomly shifted Quasi-Monte Carlo (QMC) lattice rules for lognormal diffusion problems. These problems are motivated by uncertainty quantification problems in subsurface flow. We extend the convergence analysis in [Graham et al., Numer. Math. 2014] to multilevel Quasi-Monte Carlo finite element discretizations and give a constructive proof of the dimension-independent convergence of the QMC rules. More precisely, we provide suitable parameters for the construction of such rules that yield the required variance reduction for the multilevel scheme to achieve an ε-error with a cost of O(ε^(-θ)) with θ < 2, and in practice even θ ≈ 1, for sufficiently fast decaying covariance kernels of the underlying Gaussian random field inputs. This confirms that the computational gains due to the application of multilevel sampling methods and the gains due to the application of QMC methods, both demonstrated in earlier works for the same model problem, are complementary. A series of numerical experiments confirms these gains. The results show that in practice the multilevel QMC method consistently outperforms both the multilevel MC method and the single-level variants even for non-smooth problems.
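
A minimal sketch of the multilevel telescoping estimator with plain Monte Carlo on each level; the level-dependent "solver" is a cheap stand-in for a finite element approximation, and the randomly shifted QMC lattice points analysed in the paper are replaced by i.i.d. samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def P_level(z, level):
    """Stand-in for a level-`level` discretised approximation of the quantity of
    interest for random input z; finer levels are closer to the exact value."""
    h = 2.0 ** (-level)
    return np.exp(z) + h * np.cos(10 * z)      # discretisation error shrinks like h

def mlmc_estimate(n_per_level):
    """Multilevel telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    estimated with plain Monte Carlo on each level; most samples go to the
    cheap coarse levels, few to the expensive fine levels."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        z = rng.normal(size=n)                 # same random inputs for both fidelities
        y = P_level(z, level) - (P_level(z, level - 1) if level > 0 else 0.0)
        total += y.mean()
    return total

print(mlmc_estimate([100_000, 20_000, 4_000, 800]))   # ~ E[exp(Z)] = exp(0.5)
```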