
Showing papers on "Uncertainty quantification published in 2022"


Journal ArticleDOI
TL;DR: Zhang et al. proposed a novel approach to estimate both aleatoric and epistemic uncertainties for stereo matching in an end-to-end way, where the uncertainty parameters are predicted for each potential disparity and then averaged under the guidance of the matching probability distribution.

74 citations


Journal ArticleDOI
TL;DR: Wang et al. explored fault diagnosis in a probabilistic Bayesian deep learning framework, exploiting an uncertainty-aware model to understand unknown fault information and identify inputs from unseen domains.

58 citations


Journal ArticleDOI
TL;DR: The proposed hybrid method is fully data-driven and extends the forecasting capabilities of existing time-domain and machine-learning-based methods for fatigue prediction, paving the way towards a preventive system that provides real-time safety and operational instructions and insights for structural health monitoring purposes.

27 citations


Journal ArticleDOI
TL;DR: In this paper, a qualitative review of wind power forecasting uncertainty is presented, including the definition of uncertainty sources throughout the forecast modelling chain, which acts as a guiding line for checking and evaluating the uncertainty of a wind power forecast system/model.
Abstract: Wind power forecasting has supported operational decision-making for power systems and electricity markets for 30 years. Efforts to improve the accuracy and/or certainty of deterministic or probabilistic wind power forecasts are continuously exerted by academia and industry. Forecast errors and associated uncertainties propagate through the whole forecasting chain, from weather provider to end user, and cannot be eliminated completely. Therefore, understanding the uncertainty sources and how these uncertainties propagate throughout the modelling chain is essential for implementing more rational and targeted uncertainty mitigation strategies and for standardising forecast and uncertainty validation. This paper presents a qualitative review of wind power forecasting uncertainty. First, the definition of uncertainty sources throughout the forecast modelling chain acts as a guiding line for checking and evaluating the uncertainty of a wind power forecast system/model. For each type of uncertainty source, mitigation strategies are provided, from the planning phase of wind farms and the establishment of a forecasting system through the operational and market phases. The review concludes with a discussion of uncertainty validation, with an example on ramp forecast validation. Highlights of this qualitative review and discussion include: (1) forecasting uncertainty exists and propagates everywhere throughout the entire modelling chain, from the planning phase to the market phase; (2) mitigation efforts should be exerted at every modelling step; and (3) standardised uncertainty validation practice, including why global data samples are required for forecasters to improve model performance and for forecast users to select and evaluate forecast model outputs.

26 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid architecture combining a fully connected artificial neural network (ANN) and Gaussian process regression (GPR) is proposed to ensure enhanced predictive ability and simultaneous uncertainty quantification (UQ) of the predicted TtF.

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply and evaluate three uncertainty quantification techniques for COVID-19 detection using chest X-ray (CXR) images and show that networks pretrained on CXR images outperform networks pretrained on natural image datasets such as ImageNet.
Abstract: Deep neural networks (DNNs) have been widely applied for detecting COVID-19 in medical images. Existing studies mainly apply transfer learning and other data representation strategies to generate accurate point estimates. The generalization power of these networks is always questionable, as they are developed using small datasets and fail to report their predictive confidence. Quantifying the uncertainties associated with DNN predictions is a prerequisite for their trusted deployment in medical settings. Here we comprehensively apply and comparatively evaluate three uncertainty quantification techniques for COVID-19 detection using chest X-ray (CXR) images. The novel concept of an uncertainty confusion matrix is proposed, and new performance metrics for the objective evaluation of uncertainty estimates are introduced. Through comprehensive experiments, it is shown that networks pretrained on CXR images outperform networks pretrained on natural image datasets such as ImageNet. Qualitative and quantitative evaluations also reveal that predictive uncertainty estimates are statistically higher for erroneous predictions than for correct ones; accordingly, uncertainty quantification methods are capable of flagging risky predictions with high uncertainty estimates. We also observe that ensemble methods capture uncertainties more reliably during inference. Using the new uncertainty performance metrics, we quantitatively demonstrate when DNN predictions for COVID-19 detection from chest X-rays can be trusted. It is important to note that the proposed uncertainty evaluation metrics are generic and can be applied to evaluate probabilistic forecasts in any classification problem.

24 citations
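The ensemble behaviour reported above (predictive uncertainty is higher for erroneous predictions and can flag risky cases) can be sketched with a toy deep-ensemble entropy filter. This is an illustration, not the paper's code; the threshold and probabilities are made-up values.

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the ensemble-averaged class probabilities.

    member_probs: array of shape (n_members, n_samples, n_classes).
    """
    mean_probs = member_probs.mean(axis=0)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)

def flag_risky(member_probs, threshold):
    """Flag samples whose predictive entropy exceeds the threshold."""
    return predictive_entropy(member_probs) > threshold

# Toy ensemble: 3 members, 2 samples, 2 classes (e.g. COVID / non-COVID).
probs = np.array([
    [[0.95, 0.05], [0.60, 0.40]],
    [[0.90, 0.10], [0.40, 0.60]],
    [[0.92, 0.08], [0.55, 0.45]],
])
flags = flag_risky(probs, threshold=0.3)  # members disagree on the second sample
```

The first sample, where all members agree confidently, passes; the second, where members disagree, is flagged for human review, which is the risk-mitigation pattern the abstract describes.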


Journal ArticleDOI
TL;DR: The results demonstrate that the proposed PIML model can reduce the computational time of SGSIM by several orders of magnitude, producing comparable realizations in a matter of seconds.
Abstract: Sequential Gaussian Simulation (SGSIM) as a stochastic method has been developed to avoid the smoothing effect produced in deterministic methods by generating various stochastic realizations. One of the main issues of this technique is, however, an intensive computation related to the inverse operation in solving the Kriging system, which significantly limits its application when several realizations need to be produced for uncertainty quantification. In this paper, a physics-informed machine learning (PIML) model is proposed to improve the computational efficiency of the SGSIM. To this end, only a small amount of data produced by SGSIM are used as the training dataset based on which the model can discover the spatial correlations between available data and unsampled points. To achieve this, the governing equations of the SGSIM algorithm are incorporated into our proposed network. The quality of realizations produced by the PIML model is compared for both 2D and 3D cases, visually and quantitatively. Furthermore, computational performance is evaluated on different grid sizes. Our results demonstrate that the proposed PIML model can reduce the computational time of SGSIM by several orders of magnitude while similar results can be produced in a matter of seconds.

24 citations
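The SGSIM loop the paper accelerates can be sketched minimally. The snippet below is a hypothetical 1-D illustration using simple kriging and an exponential covariance; it is not the authors' PIML model, and the data locations and correlation length are invented for the example.

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_len=10.0):
    """Exponential covariance model: C(h) = sill * exp(-|h| / corr_len)."""
    return sill * np.exp(-np.abs(h) / corr_len)

def sgsim_1d(grid, data_x, data_v, seed=0):
    """Sequential Gaussian simulation on a 1-D grid with simple kriging.

    Visits the unsampled nodes along a random path; at each node it solves
    the kriging system against all values known so far (the costly inverse
    operation the paper's surrogate targets), then draws from the
    conditional Gaussian.
    """
    rng = np.random.default_rng(seed)
    known_x, known_v, values = list(data_x), list(data_v), {}
    for x in rng.permutation(grid):
        kx, kv = np.array(known_x), np.array(known_v)
        C = exp_cov(kx[:, None] - kx[None, :])  # data-to-data covariances
        c = exp_cov(kx - x)                     # data-to-node covariances
        w = np.linalg.solve(C, c)               # simple-kriging weights
        mean = w @ kv                           # SK mean (zero global mean)
        var = max(exp_cov(0.0) - w @ c, 0.0)    # SK variance, clipped at zero
        values[x] = rng.normal(mean, np.sqrt(var))
        known_x.append(x)
        known_v.append(values[x])
    return np.array([values[x] for x in grid])

grid = np.arange(1.0, 19.0)  # 18 unsampled nodes between two conditioning data
realization = sgsim_1d(grid, data_x=[0.0, 19.0], data_v=[1.0, -1.0])
```

Each pass of the loop grows the kriging system by one point, which is why the cost climbs steeply with grid size and why replacing the solve with a learned model pays off when many realizations are needed.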


Journal ArticleDOI
TL;DR: In this paper , a multi-level uncertainty propagation approach is proposed to reduce the problem dimensionality and a variables-grouping approach is used for reducing the number of model evaluations for uncertainty propagation.

23 citations


Journal ArticleDOI
TL;DR: In this paper , the authors proposed a highly efficient deep-learning surrogate framework that is able to accurately predict the response of bodies undergoing large deformations in real-time, which is trained with force-displacement data obtained with the finite element method.

23 citations


Journal ArticleDOI
01 Jan 2022
TL;DR: GMM-Det is a real-time method for extracting epistemic uncertainty from object detectors to identify and reject open-set errors; the detector produces a structured logit space that is modelled with class-specific Gaussian Mixture Models.
Abstract: Deployed into an open world, object detectors are prone to open-set errors, false positive detections of object classes not present in the training dataset. We propose GMM-Det, a real-time method for extracting epistemic uncertainty from object detectors to identify and reject open-set errors. GMM-Det trains the detector to produce a structured logit space that is modelled with class-specific Gaussian Mixture Models. At test time, open-set errors are identified by their low log-probability under all Gaussian Mixture Models. We test two common detector architectures, Faster R-CNN and RetinaNet, across three varied datasets spanning robotics and computer vision. Our results show that GMM-Det consistently outperforms existing uncertainty techniques for identifying and rejecting open-set detections, especially at the low-error-rate operating point required for safety-critical applications. GMM-Det maintains object detection performance, and introduces only minimal computational overhead. We also introduce a methodology for converting existing object detection datasets into specific open-set datasets to evaluate open-set performance in object detection.

21 citations
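The core mechanism of GMM-Det (fit class-specific Gaussian Mixture Models on the detector's logit features and reject detections that score low under every model) can be sketched as follows. The synthetic 2-D features below stand in for the structured logit space and are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for per-class logit features of a trained detector.
rng = np.random.default_rng(0)
feats = {
    0: rng.normal(loc=[5.0, 0.0], scale=0.5, size=(200, 2)),
    1: rng.normal(loc=[0.0, 5.0], scale=0.5, size=(200, 2)),
}

# One class-specific GMM per known class.
gmms = {c: GaussianMixture(n_components=1, random_state=0).fit(x)
        for c, x in feats.items()}

def known_class_score(z):
    """Max log-likelihood under any known class; low values suggest open-set."""
    return max(g.score_samples(z.reshape(1, -1))[0] for g in gmms.values())

known = np.array([5.0, 0.2])      # near the class-0 cluster
unknown = np.array([-5.0, -5.0])  # far from every known class: open-set candidate
```

Thresholding `known_class_score` then separates in-distribution detections from open-set errors; the paper's contribution is training the detector so the logit space actually forms such class clusters.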


Journal ArticleDOI
TL;DR: In this article, an additional hyper distribution is proposed to characterize the uncertainty of prediction error variances across different datasets, and asymptotic approximations are integrated into the hierarchical Bayesian modeling (HBM) framework to simplify the computation of the posterior distribution of hyper-parameters.

Journal ArticleDOI
TL;DR: In this paper, an adaptive Kriging sampling strategy based on the Classification Uncertainty Quantification (KCUQ) was proposed to reduce the computational cost of conventional RBDO methods.

Journal ArticleDOI
TL;DR: The proposed methodology, developed by employing staircase random variables and the Bhattacharyya distance, fulfils the challenging expectation in stochastic model updating of calibrating the probabilistic distributions of parameters without any assumption about the distribution formats.

Journal ArticleDOI
TL;DR: In this paper, an approximate Bayesian computation model updating framework is developed by employing staircase random variables and the Bhattacharyya distance to calibrate the probabilistic distributions of parameters without any assumption about the distribution formats.

Journal ArticleDOI
TL;DR: In this paper , the authors present a framework for addressing a variety of engineering design challenges with limited empirical data and partial information, including guidance on the characterisation of a mixture of uncertainties, efficient methodologies to integrate data into design decisions, and to conduct reliability analysis, and risk/reliability based design optimisation.

Journal ArticleDOI
TL;DR: This framework includes guidance on characterising a mixture of uncertainties and efficient methodologies to integrate data into design decisions, conduct reliability analysis, and perform risk/reliability-based design optimisation.

Journal ArticleDOI
TL;DR: In this article, the structural responses of several mechanical systems are analyzed through their basic probabilistic characteristics, which have been validated using a probabilistic semi-analytical approach as well as crude Monte Carlo simulation.

Journal ArticleDOI
TL;DR: In this article, a complete review of uncertainty categorization and of several techniques to address uncertainty in power systems is presented, along with the merits and weaknesses of each technique, and challenges are highlighted for future research directions.

Journal ArticleDOI
TL;DR: In this article, the authors present an extensive review of uncertainty classification and different uncertainty handling approaches in power systems, along with the pros and cons of each method, including probabilistic power flow (PPF), analytical methods (AMs), and approximate methods (APMs).

Journal ArticleDOI
01 Dec 2022
TL;DR: In this paper, the authors model multivariate uncertainty for regression problems with neural networks and train a deep uncertainty covariance matrix model in two ways: directly using a multivariate Gaussian density loss function and indirectly using end-to-end training through a Kalman filter.
Abstract: Deep learning has the potential to dramatically impact navigation and tracking state estimation problems critical to autonomous vehicles and robotics. Measurement uncertainties in state estimation systems based on Kalman and other Bayes filters are typically assumed to be a fixed covariance matrix. This assumption is risky, particularly for “black box” deep learning models, in which uncertainty can vary dramatically and unexpectedly. Accurate quantification of multivariate uncertainty will allow for the full potential of deep learning to be used more safely and reliably in these applications. We show how to model multivariate uncertainty for regression problems with neural networks, incorporating both aleatoric and epistemic sources of heteroscedastic uncertainty. We train a deep uncertainty covariance matrix model in two ways: directly using a multivariate Gaussian density loss function and indirectly using end-to-end training through a Kalman filter. We experimentally show in a visual tracking problem the large impact that accurate multivariate uncertainty quantification can have on the Kalman filter performance for both in-domain and out-of-domain evaluation data. We additionally show, in a challenging visual odometry problem, how end-to-end filter training can allow uncertainty predictions to compensate for filter weaknesses.
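The direct training route described above minimizes a multivariate Gaussian negative log-likelihood. As a rough sketch (not the paper's implementation), the loss can be written with a Cholesky-parameterized covariance, a common way to keep the predicted covariance positive definite during training.

```python
import numpy as np

def mvn_nll(y, mu, L):
    """Negative log-likelihood of y under N(mu, L @ L.T).

    Parameterizing the covariance by its Cholesky factor L (with a
    positive diagonal) guarantees positive definiteness.
    """
    d = y - mu
    z = np.linalg.solve(L, d)  # avoids forming the covariance inverse
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    k = y.shape[0]
    return 0.5 * (z @ z + log_det + k * np.log(2.0 * np.pi))

y = np.array([1.0, 2.0])
mu = np.array([1.1, 1.9])
L = np.array([[0.5, 0.0],
              [0.1, 0.4]])
nll_tight = mvn_nll(y, mu, L)
nll_loose = mvn_nll(y, mu, 3.0 * L)  # inflated uncertainty pays a log-det penalty
```

The log-determinant term penalizes over-inflated covariances while the Mahalanobis term penalizes over-confident ones, which is what lets a network learn covariances that downstream Kalman filtering can trust.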

Journal ArticleDOI
TL;DR: In this paper, a probabilistic deep learning methodology for uncertainty quantification of multi-component systems' RUL is presented; it combines a probabilistic model and a deep recurrent neural network to predict the components' RUL distributions.

Journal ArticleDOI
TL;DR: Analysis tools using concepts developed in meteorology and machine learning for the validation of probabilistic forecasters are considered, adapted to CC-UQ, and applied to datasets of prediction uncertainties provided by composite methods, Bayesian ensemble methods, machine learning, and a posteriori statistical methods.
Abstract: Uncertainty quantification (UQ) in computational chemistry (CC) is still in its infancy. Very few CC methods are designed to provide a confidence level on their predictions, and most users still rely improperly on the mean absolute error as an accuracy metric. The development of reliable UQ methods is essential, notably for CC to be used confidently in industrial processes. A review of the CC-UQ literature shows that there is no common standard procedure to report or validate prediction uncertainty. I consider here analysis tools using concepts (calibration and sharpness) developed in meteorology and machine learning for the validation of probabilistic forecasters. These tools are adapted to CC-UQ and applied to datasets of prediction uncertainties provided by composite methods, Bayesian ensemble methods, and machine learning and a posteriori statistical methods.
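One of the calibration concepts the review borrows from meteorology can be illustrated with a minimal coverage check: compare the empirical fraction of errors falling within k predicted standard uncertainties to the Gaussian expectation. The snippet below uses synthetic, perfectly calibrated data as an assumption; it is a sketch, not the paper's tooling.

```python
import numpy as np
from math import erf, sqrt

def empirical_coverage(errors, sigmas, k=1.0):
    """Fraction of errors within k predicted standard uncertainties."""
    return float(np.mean(np.abs(errors) <= k * sigmas))

def expected_coverage(k=1.0):
    """Coverage a well-calibrated Gaussian forecaster should show at k sigma."""
    return erf(k / sqrt(2.0))

# Synthetic forecaster: errors are actually drawn with the stated sigmas,
# so the empirical coverage should match the Gaussian expectation.
rng = np.random.default_rng(1)
sigmas = rng.uniform(0.5, 2.0, size=5000)
errors = rng.normal(0.0, sigmas)
emp = empirical_coverage(errors, sigmas, k=1.0)
ref = expected_coverage(k=1.0)  # about 0.683
```

A method whose empirical coverage falls well below the reference is over-confident; well above, under-confident. Sharpness is then judged by how small the sigmas are among methods that pass this check.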

Journal ArticleDOI
TL;DR: This work aims to bridge the gap by expanding the capabilities of Bayesian DGP posterior inference through the incorporation of the Vecchia approximation, allowing linear computational scaling without compromising accuracy or UQ.
Abstract: Deep Gaussian processes (DGPs) upgrade ordinary GPs through functional composition, in which intermediate GP layers warp the original inputs, providing flexibility to model non-stationary dynamics. Two DGP regimes have emerged in recent literature. A “big data” regime, prevalent in machine learning, favors approximate, optimization-based inference for fast, high-fidelity prediction. A “small data” regime, preferred for computer surrogate modeling, deploys posterior integration for enhanced uncertainty quantification (UQ). We aim to bridge this gap by expanding the capabilities of Bayesian DGP posterior inference through the incorporation of the Vecchia approximation, allowing linear computational scaling without compromising accuracy or UQ. We are motivated by surrogate modeling of simulation campaigns with upwards of 100,000 runs – a size too large for previous fully-Bayesian implementations – and demonstrate prediction and UQ superior to that of “big data” competitors. All methods are implemented in the deepgp package on CRAN.

Journal ArticleDOI
TL;DR: In this paper, the authors presented Bayesian model updating and identifiability analysis of nonlinear finite element (FE) models, with a specific testbed civil structure, the Pine Flat concrete gravity dam, as an illustrative example.

Journal ArticleDOI
TL;DR: In this paper, a hierarchical probabilistic model for Lamb wave detection is formulated in the Bayesian framework, where uncertainties from the model choice, model parameters, and other variables can be explicitly incorporated using the proposed method.

Journal ArticleDOI
TL;DR: In this paper , an unsupervised Bayesian learning framework for discovery of parsimonious and interpretable constitutive laws with quantifiable uncertainties is proposed, where the authors leverage domain knowledge by including features based on existing, both physics-based and phenomenological, constitutive models.

Journal ArticleDOI
TL;DR: In this article, a new time-domain probabilistic technique based on the hierarchical Bayesian modeling (HBM) framework is proposed for calibration and uncertainty quantification of hysteretic-type nonlinearities of dynamical systems.

Journal ArticleDOI
TL;DR: In this article, a non-Gaussian stochastic model based on polynomial chaos and fractional moments is developed; however, the model is not suitable for real-world problems.

Journal ArticleDOI
TL;DR: In this paper, the authors explore the uncertainty in regression neural networks to construct prediction intervals and design a novel loss function that enables learning uncertainty without uncertainty labels.