
Showing papers on "Bayesian probability published in 2022"


Journal ArticleDOI
TL;DR: Bayesian statistics offer a formalism to understand and quantify the uncertainty associated with deep neural network predictions as mentioned in this paper, and a complete toolset to design, implement, train, use and evaluate Bayesian neural networks.
Abstract: Modern deep learning methods constitute incredibly powerful tools to tackle a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offer a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides deep learning practitioners with an overview of the relevant literature and a complete toolset to design, implement, train, use and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
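As a toy illustration of the tutorial's core idea — treating weights as distributions and quantifying predictive uncertainty by Monte Carlo sampling — here is a minimal Python sketch for a single stochastic linear unit. The posterior means and standard deviations are invented for illustration, not taken from the paper:

```python
import math
import random

random.seed(0)

# A stochastic "network": a single linear unit whose weight and bias are
# Gaussian posterior distributions rather than point estimates.
# These posterior parameters are made-up illustration values, not fitted.
w_mean, w_std = 2.0, 0.3
b_mean, b_std = 0.5, 0.1

def predict(x, n_samples=2000):
    """Monte Carlo predictive distribution: sample weights, average outputs."""
    outputs = [random.gauss(w_mean, w_std) * x + random.gauss(b_mean, b_std)
               for _ in range(n_samples)]
    mean = sum(outputs) / n_samples
    var = sum((o - mean) ** 2 for o in outputs) / n_samples
    return mean, math.sqrt(var)

mu, sigma = predict(1.0)
```

Sampling the weights many times turns one point prediction into a predictive distribution; the spread (sigma) is the uncertainty estimate that a deterministic network lacks, and it grows for inputs further from where the weights are well constrained.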

68 citations


Journal ArticleDOI
TL;DR: In this paper, a classification method for computed tomography chest images in the COVID-19 Radiography Database using features extracted by popular Convolutional Neural Network (CNN) models was presented; the determination of hyperparameters of Machine Learning (ML) algorithms by Bayesian optimization and ANN-based image segmentation are the two main contributions.

52 citations


Journal ArticleDOI
01 Aug 2022-MethodsX
TL;DR: The Bayesian Mindsponge Framework (BMF) as mentioned in this paper is a new analytical method for investigating social, psychological, and behavioral phenomena, which combines the mindsponge mechanism's conceptual formulation power with Bayesian analysis's inferential advantages.

46 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian optimization-based convolutional neural network (CNN) model was proposed for the recognition of chest X-ray images of COVID-19 artefacts in real-world situations.

46 citations


Journal ArticleDOI
TL;DR: In this article, the authors combined principled statistical methods with a framework based on catastrophe theory and approximate Bayesian computation to formulate a quantitative dynamical landscape that accurately predicts cell fate outcomes.
Abstract: Fate decisions in developing tissues involve cells transitioning between discrete cell states, each defined by distinct gene expression profiles. The Waddington landscape, in which the development of a cell is viewed as a ball rolling through a valley-filled terrain, is an appealing way to describe differentiation. To construct and validate accurate landscapes, quantitative methods based on experimental data are necessary. We combined principled statistical methods with a framework based on catastrophe theory and approximate Bayesian computation to formulate a quantitative dynamical landscape that accurately predicts cell fate outcomes of pluripotent stem cells exposed to different combinations of signaling factors. Analysis of the landscape revealed two distinct ways in which cells make a binary choice between one of two fates. We suggest that these represent archetypal designs for developmental decisions. The approach is broadly applicable for the quantitative analysis of differentiation and for determining the logic of developmental decisions.
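The ball-in-a-landscape picture can be made concrete with a cusp potential from catastrophe theory, the kind of building block such landscape models use. This Python sketch (illustrative parameters, not the paper's fitted landscape) rolls a "cell" downhill by gradient descent and shows how a signal-dependent tilt selects one of two fates:

```python
def potential_grad(x, a=1.0, b=0.0):
    # dV/dx for the cusp potential V(x) = x**4/4 - a*x**2/2 - b*x,
    # which has two valleys (fates) when a > 0; b tilts the landscape.
    return x ** 3 - a * x - b

def settle(x0, b, steps=5000, lr=0.01):
    """Roll a 'cell' downhill until it reaches an attractor (a fate)."""
    x = x0
    for _ in range(steps):
        x -= lr * potential_grad(x, b=b)
    return x

# A small positive/negative tilt (a stand-in for a signalling factor)
# biases which of the two valleys the cell ends in.
fate_plus = settle(0.1, b=0.2)
fate_minus = settle(-0.1, b=-0.2)
```

With b = 0 the two valleys are symmetric; tilting the landscape one way or the other reproduces a binary fate choice even from a near-identical starting state.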

41 citations


Journal ArticleDOI
TL;DR: This study presents a Bayesian dynamic regression (BDR) method to reconstruct the missing SHM data and shows that the multivariate BDR model exhibits excellent performance to rebuild the missing data in terms of both computational efficiency and accuracy.
Abstract: Massive data that provide valuable information regarding the structural behavior are continuously collected by the structural health monitoring (SHM) system. The quality of monitoring data is directly related to the accuracy of the structural condition assessment and maintenance decisions. Missing data is a common and challenging issue in SHM, compromising the reliability of data-driven methods. Thus, the accurate reconstruction of missing SHM data is an essential step for the reliable evaluation of the structural condition. Data recovery can be considered as a regression task by modeling the correlation among sensors. The Bayesian linear regression (BLR) model has been extensively used in probabilistic regression analysis due to its efficiency and ability to quantify uncertainty. However, because of its fixed coefficients (i.e., a static model) and linear assumption, the BLR model fails to accurately capture the relationship and accommodate the changes in related variables. Given this limitation, this study presents a Bayesian dynamic regression (BDR) method to reconstruct the missing SHM data. The BDR model assumes that the linear form is only locally suitable and that the regression coefficients vary according to a random walk. In particular, the multivariate BDR model can reconstruct the missing data of different sensors simultaneously. The Kalman filter and expectation maximization (EM) algorithms are employed to estimate the state variables (regressors) and parameters. The feasibility of the multivariate BDR model is demonstrated using data from a building model and a long-span cable-stayed bridge. The results show that the multivariate BDR model exhibits excellent performance in rebuilding the missing data in terms of both computational efficiency and accuracy. Compared to the standard BLR and linear BDR models, the quadratic BDR model achieves better reconstruction accuracy.
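The key BDR ingredients — a random-walk regression coefficient tracked by a Kalman filter, with missing observations reconstructed from a correlated channel — can be sketched in a few lines of Python. This scalar toy (simulated data, not the paper's multivariate model) reconstructs a gap in one sensor from another:

```python
import random

random.seed(1)

# Simulate a reference sensor x and a target sensor y whose slowly drifting
# relation is y_t = beta_t * x_t + noise -- a toy stand-in for two
# correlated SHM channels.
T = 200
beta_true, xs, ys = [], [], []
beta = 2.0
for t in range(T):
    beta += random.gauss(0, 0.01)      # random-walk regression coefficient
    x = random.gauss(5, 1)
    beta_true.append(beta)
    xs.append(x)
    ys.append(beta * x + random.gauss(0, 0.1))

missing = set(range(80, 120))          # a gap of lost target-sensor data

# Scalar Kalman filter over the state beta_t (random-walk dynamics).
q, r = 0.01 ** 2, 0.1 ** 2             # process / observation noise variances
b_est, p = 0.0, 10.0                   # diffuse initial state
recon = []
for t in range(T):
    p += q                             # predict: beta_t = beta_{t-1} + w_t
    if t not in missing:               # update only where y_t is observed
        h = xs[t]
        k = p * h / (h * h * p + r)
        b_est += k * (ys[t] - h * b_est)
        p *= (1 - k * h)
    recon.append(b_est * xs[t])        # reconstructed y_t

gap_err = max(abs(recon[t] - ys[t]) for t in missing)
```

During the gap the filter simply propagates the last coefficient estimate, so the reconstruction error stays close to the drift accumulated while no observations arrive; once data resume, the estimate locks back on.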

36 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a stable image encryption scheme to create a visually secure cipher image by using a new fractional-order chaotic map, Bayesian compressive sensing and DVT embedding.

34 citations


Journal ArticleDOI
TL;DR: The findings indicate that the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity, multiobjective acquisition ensembles with Pareto front solutions improve queried configurations, and robust acquisition maximisers afford empirical advantages relative to their non-robust counterparts.
Abstract: In this work we rigorously analyse assumptions inherent to black-box optimisation hyper-parameter tuning tasks. Our results on the Bayesmark benchmark indicate that heteroscedasticity and non-stationarity pose significant challenges for black-box optimisers. Based on these findings, we propose a Heteroscedastic and Evolutionary Bayesian Optimisation solver (HEBO). HEBO performs non-linear input and output warping, admits exact marginal log-likelihood optimisation and is robust to the values of learned parameters. We demonstrate HEBO’s empirical efficacy on the NeurIPS 2020 Black-Box Optimisation challenge, where HEBO placed first. Upon further analysis, we observe that HEBO significantly outperforms existing black-box optimisers on 108 machine learning hyperparameter tuning tasks comprising the Bayesmark benchmark. Our findings indicate that the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity, multiobjective acquisition ensembles with Pareto front solutions improve queried configurations, and robust acquisition maximisers afford empirical advantages relative to their non-robust counterparts. We hope these findings may serve as guiding principles for practitioners of Bayesian optimisation.
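One of HEBO's ingredients, output warping, can be illustrated directly: when noise is multiplicative (a common source of the heteroscedasticity the paper reports), a log warp makes the noise scale roughly constant. A small Python sketch with an invented toy objective:

```python
import math
import random

random.seed(2)

def noisy_loss(x):
    # Heteroscedastic toy objective: noise scale grows with the value.
    base = (x - 0.3) ** 2 + 0.05
    return base * math.exp(random.gauss(0, 0.3))   # multiplicative noise

def sd(vals):
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

raw_near = [noisy_loss(0.3) for _ in range(400)]   # near the optimum
raw_far = [noisy_loss(1.5) for _ in range(400)]    # far from the optimum
warp_near = [math.log(v) for v in raw_near]        # log output warping
warp_far = [math.log(v) for v in raw_far]
```

The raw noise standard deviation differs by more than an order of magnitude across the input space, while after the log warp it is roughly constant — one intuition for why warping helps a surrogate model with a single noise parameter.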

32 citations


Journal ArticleDOI
TL;DR: In this paper, the authors model the entire process of data generation to processing, model updating and reliability calculation, and investigate it on a deteriorating bridge system, assuming that dynamic response data are obtained in a sequential fashion from deployed accelerometers, subsequently processed by an output-only operational modal analysis scheme for identifying the system's modal characteristics.

32 citations


Posted ContentDOI
26 Sep 2022-bioRxiv
TL;DR: Approaches are described that exploit the increased signal-to-noise ratio in the averaged structure to optimise tilt series alignments, beam-induced motions of the particles throughout the tilt series acquisition, defoci of the individual particles, as well as higher-order optical aberrations of the microscope.
Abstract: We present a new approach for macromolecular structure determination from multiple particles in electron cryo-tomography (cryo-ET) data sets. Whereas existing subtomogram averaging approaches are based on 3D data models, we propose to optimise a regularised likelihood target that approximates a function of the 2D experimental images. In addition, analogous to Bayesian polishing and contrast transfer function (CTF) refinement in single-particle analysis, we describe approaches that exploit the increased signal-to-noise ratio in the averaged structure to optimise tilt series alignments, beam-induced motions of the particles throughout the tilt series acquisition, defoci of the individual particles, as well as higher-order optical aberrations of the microscope. Implementation of our approaches in the open-source software package RELION aims to facilitate their general use, in particular for those researchers who are already familiar with its single-particle analysis tools. We illustrate for three applications that our approaches allow structure determination from cryo-ET data to resolutions sufficient for de novo atomic modelling.

32 citations


Journal ArticleDOI
TL;DR: In this paper, the Scientific Committee (SC) reconfirms that the benchmark dose (BMD) approach is a scientifically more advanced method compared to the no-observed-adverse-effect level (NOAEL) approach for deriving a Reference Point (RP).
Abstract: The Scientific Committee (SC) reconfirms that the benchmark dose (BMD) approach is a scientifically more advanced method compared to the no‐observed‐adverse‐effect‐level (NOAEL) approach for deriving a Reference Point (RP). The major change compared to the previous Guidance (EFSA SC, 2017) concerns the Section 2.5, in which a change from the frequentist to the Bayesian paradigm is recommended. In the former, uncertainty about the unknown parameters is measured by confidence and significance levels, interpreted and calibrated under hypothetical repetition, while probability distributions are attached to the unknown parameters in the Bayesian approach, and the notion of probability is extended to reflect uncertainty of knowledge. In addition, the Bayesian approach can mimic a learning process and reflects the accumulation of knowledge over time. Model averaging is again recommended as the preferred method for estimating the BMD and calculating its credible interval. The set of default models to be used for BMD analysis has been reviewed and amended so that there is now a single set of models for quantal and continuous data. The flow chart guiding the reader step‐by‐step when performing a BMD analysis has also been updated, and a chapter comparing the frequentist to the Bayesian paradigm inserted. Also, when using Bayesian BMD modelling, the lower bound (BMDL) is to be considered as potential RP, and the upper bound (BMDU) is needed for establishing the BMDU/BMDL ratio reflecting the uncertainty in the BMD estimate. This updated guidance does not call for a general re‐evaluation of previous assessments where the NOAEL approach or the BMD approach as described in the 2009 or 2017 Guidance was used, in particular when the exposure is clearly lower (e.g. more than one order of magnitude) than the health‐based guidance value. Finally, the SC firmly reiterates its call to reconsider test guidelines given the wide application of the BMD approach.

Journal ArticleDOI
TL;DR: In this paper, Bayesian model averaging (BMA) was used to quantify the uncertainty of model parameters and inputs simultaneously, and the results indicated that BMA using multiple adaptive neuro-fuzzy inference system (ANFIS) and multi-layer perceptron (MLP) models was useful for predicting tomato yield.

Journal ArticleDOI
TL;DR: In this paper, a probabilistic crop yield prediction framework is presented, which employs Bayesian Model Averaging (BMA) and a set of Copula functions to integrate the outputs of multiple deep neural networks, including the 3DCNN (3D Convolutional Neural Network) and ConvLSTM (Convolutional Long Short-Term Memory).

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed a tensor completion framework which can simultaneously take advantage of the global-local-nonlocal priors to achieve state-of-the-art performance both quantitatively and qualitatively.
Abstract: Completing missing entries in multidimensional visual data is a typical ill-posed problem that requires appropriate exploitation of prior information of the underlying data. Commonly used priors can be roughly categorized into three classes: global tensor low-rankness, local properties, and nonlocal self-similarity (NSS); most existing works utilize one or two of them to implement completion. Naturally, there arises an interesting question: can one concurrently make use of multiple priors in a unified way, such that they can collaborate with each other to achieve better performance? This work gives a positive answer by formulating a novel tensor completion framework which can simultaneously take advantage of the global-local-nonlocal priors. In the proposed framework, the tensor train (TT) rank is adopted to characterize the global correlation; meanwhile, two Plug-and-Play (PnP) denoisers, including a convolutional neural network (CNN) denoiser and the color block-matching and 3D filtering (CBM3D) denoiser, are incorporated to preserve local details and exploit NSS, respectively. Then, we design a proximal alternating minimization algorithm to efficiently solve this model under the PnP framework. Under mild conditions, we establish the convergence guarantee of the proposed algorithm. Extensive experiments show that these priors organically benefit from each other to achieve state-of-the-art performance both quantitatively and qualitatively.
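The global low-rank prior alone already enables completion when the data are simple. As a much-reduced stand-in for the paper's TT-rank-plus-PnP model, this Python sketch completes a noiseless rank-1 matrix by alternating least squares over the observed entries only (toy data; the paper's framework additionally plugs in CNN and CBM3D denoisers for local and nonlocal structure):

```python
import random

random.seed(3)

# Ground-truth rank-1 matrix M = u v^T with ~35% of entries missing.
n = 10
u = [random.uniform(1, 2) for _ in range(n)]
v = [random.uniform(1, 2) for _ in range(n)]
M = [[u[i] * v[j] for j in range(n)] for i in range(n)]
mask = [[random.random() > 0.35 for j in range(n)] for i in range(n)]  # True = observed

# Alternating least squares on observed entries only (rank-1 factors a, b).
a = [1.0] * n
b = [1.0] * n
for _ in range(100):
    for i in range(n):                      # update row factors given b
        num = sum(M[i][j] * b[j] for j in range(n) if mask[i][j])
        den = sum(b[j] ** 2 for j in range(n) if mask[i][j])
        if den > 0:
            a[i] = num / den
    for j in range(n):                      # update column factors given a
        num = sum(M[i][j] * a[i] for i in range(n) if mask[i][j])
        den = sum(a[i] ** 2 for i in range(n) if mask[i][j])
        if den > 0:
            b[j] = num / den

# Error on the entries that were never observed.
err = max(abs(a[i] * b[j] - M[i][j])
          for i in range(n) for j in range(n) if not mask[i][j])
```

Because the data are exactly rank-1 and noise-free, the low-rank prior alone recovers the missing entries; real visual data need the additional local and nonlocal priors the paper combines.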

Journal ArticleDOI
TL;DR: It is demonstrated that RoBMA finds evidence for the absence of publication bias in Registered Replication Reports and reliably avoids false positives and is relatively robust to model misspecification and simulations show that it outperforms existing methods.
Abstract: Meta-analysis is an important quantitative tool for cumulative science, but its application is frustrated by publication bias. In order to test and adjust for publication bias, we extend model-averaged Bayesian meta-analysis with selection models. The resulting robust Bayesian meta-analysis (RoBMA) methodology does not require all-or-none decisions about the presence of publication bias, can quantify evidence in favor of the absence of publication bias, and performs well under high heterogeneity. By model-averaging over a set of 12 models, RoBMA is relatively robust to model misspecification and simulations show that it outperforms existing methods. We demonstrate that RoBMA finds evidence for the absence of publication bias in Registered Replication Reports and reliably avoids false positives. We provide an implementation in R so that researchers can easily use the new methodology in practice. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
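The model-averaging backbone of RoBMA can be sketched with just two models — a null (zero effect) and an alternative with a normal prior on the effect — weighted by their marginal likelihoods. This Python toy uses invented study data; RoBMA itself averages 12 models, including selection models for publication bias:

```python
import math

def norm_logpdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Toy meta-analysis: study effect estimates and their sampling variances
# (made-up numbers for illustration).
effects = [0.42, 0.31, 0.50, 0.28, 0.39]
variances = [0.02, 0.03, 0.015, 0.04, 0.025]

# Model H0: the true effect is exactly 0.
logml_h0 = sum(norm_logpdf(y, 0.0, v) for y, v in zip(effects, variances))

# Model H1: true effect mu ~ N(0, 1); marginal likelihood via the chain
# rule, updating the conjugate normal posterior after each study.
mu_mean, mu_var = 0.0, 1.0
logml_h1 = 0.0
for y, v in zip(effects, variances):
    logml_h1 += norm_logpdf(y, mu_mean, mu_var + v)   # predictive density
    k = mu_var / (mu_var + v)                          # conjugate update
    mu_mean += k * (y - mu_mean)
    mu_var *= (1 - k)

# Posterior model probabilities (equal prior odds) and averaged estimate.
m = max(logml_h0, logml_h1)
w0 = math.exp(logml_h0 - m)
w1 = math.exp(logml_h1 - m)
p_h1 = w1 / (w0 + w1)
avg_effect = p_h1 * mu_mean + (1 - p_h1) * 0.0
```

Because the conclusion is a weighted average rather than an all-or-none model choice, evidence can also accumulate in favor of a null model — the property RoBMA uses to quantify evidence for the absence of publication bias.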

Journal ArticleDOI
TL;DR: Niehaus et al. as mentioned in this paper used a series of experiments to understand whether and how people's beliefs about their own abilities are biased relative to the Bayesian benchmark and how these beliefs then affect behavior.
Abstract: We use a series of experiments to understand whether and how people’s beliefs about their own abilities are biased relative to the Bayesian benchmark and how these beliefs then affect behavior. We find that subjects systematically and substantially overweight positive feedback relative to negative (asymmetry) and also update too little overall (conservatism). These biases are substantially less pronounced in an ego-free control experiment. Updating does retain enough of the structure of Bayes’ rule to let us model it coherently in an optimizing framework, in which, interestingly, asymmetry and conservatism emerge as complementary biases. We also find that exogenous changes in beliefs affect subjects’ decisions to enter into a competition and do so similarly for more and less biased subjects, suggesting that people cannot “undo” their biases when the time comes to decide. This paper was accepted by Axel Ockenfels, behavioral economics and decision analysis. Funding: Financial support from the National Science Foundation (NSF), Harvard University, and Wesleyan University is gratefully acknowledged. P. Niehaus received financial support from an NSF Graduate Research Fellowship. Supplemental Material: The data files are available at https://doi.org/10.1287/mnsc.2021.4294 .
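The Bayesian benchmark and the two documented biases (asymmetry and conservatism) can be written down in a few lines. In this Python sketch the bias parameters are illustrative values, not the paper's estimates:

```python
def bayes_update(prior, signal, acc=0.75):
    """Exact Bayes' rule for a binary 'high ability' state; the feedback
    signal is correct with probability acc."""
    like_high = acc if signal else 1 - acc
    like_low = 1 - acc if signal else acc
    return prior * like_high / (prior * like_high + (1 - prior) * like_low)

def biased_update(prior, signal, acc=0.75, conservatism=0.5, asymmetry=0.3):
    """Stylised biased updater: take only a fraction (conservatism) of the
    Bayesian step, and shrink that fraction further after negative feedback
    (asymmetry). Parameter values are illustrative, not fitted."""
    target = bayes_update(prior, signal, acc)
    w = conservatism * (1.0 if signal else 1.0 - asymmetry)
    return prior + w * (target - prior)

p_bayes = p_biased = 0.5
for s in [True, False]:          # one positive and one negative signal
    p_bayes = bayes_update(p_bayes, s)
    p_biased = biased_update(p_biased, s)
```

After one positive and one negative signal of equal strength, the Bayesian belief returns exactly to the prior, while the asymmetric, conservative updater ends up overconfident.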

Journal ArticleDOI
TL;DR: The state of the art in the literature on the free energy principle is reviewed, distinguishing between three ways in which Bayesian mechanics has been applied to particular systems (i.e., path-tracking, mode-tracking, and mode-matching).
Abstract: The aim of this paper is to introduce a field of study that has emerged over the last decade, called Bayesian mechanics. Bayesian mechanics is a probabilistic mechanics, comprising tools that enable us to model systems endowed with a particular partition (i.e. into particles), where the internal states (or the trajectories of internal states) of a particular system encode the parameters of beliefs about external states (or their trajectories). These tools allow us to write down mechanical theories for systems that look as if they are estimating posterior probability distributions over the causes of their sensory states. This provides a formal language for modelling the constraints, forces, potentials and other quantities determining the dynamics of such systems, especially as they entail dynamics on a space of beliefs (i.e. on a statistical manifold). Here, we will review the state of the art in the literature on the free energy principle, distinguishing between three ways in which Bayesian mechanics has been applied to particular systems (i.e. path-tracking, mode-tracking and mode-matching). We go on to examine a duality between the free energy principle and the constrained maximum entropy principle, both of which lie at the heart of Bayesian mechanics, and discuss its implications.

Journal ArticleDOI
TL;DR: In this paper, a Bayesian dynamic extreme value model is proposed for conflict-based real-time safety analysis, which integrates the newer data with prior information to recursively update the model parameters and react to sudden trend changes.

Journal ArticleDOI
TL;DR: In this article, a weak-label-based Bayesian U-Net was proposed to segment the optic disc in fundus images, where a probabilistic graphical model and a Bayesian approach with the state-of-the-art U-Net framework were explored.

Journal ArticleDOI
TL;DR: Increasing the signal sampling frequency for AE event identification, using a real-time inverted 3D velocity model and updating the PATSEs in real time could further improve the AE event location accuracy.

Journal ArticleDOI
TL;DR: A generalized Wiener process-based degradation model with an adaptive drift to characterize the degradation behavior exhibiting nonlinearity, temporal uncertainty, item-to-item variability, and time-varying degradation is proposed.

Journal ArticleDOI
26 Apr 2022-Symmetry
TL;DR: This paper presents a new univariate flexible generator of distributions, namely, the odd Perks-G class, together with a novel discrete distribution class and a novel log-location-scale regression model based on the odd Perks–Weibull distribution.
Abstract: In this paper, we present a new univariate flexible generator of distributions, namely, the odd Perks-G class. Some special models in this class are introduced. The quantile function (QFUN), ordinary and incomplete moments (MOMs), generating function (GFUN), moments of residual and reversed residual lifetimes (RLT), and four different types of entropy are all structural aspects of the proposed family that hold for any baseline model. Maximum likelihood (ML) and maximum product spacing (MPS) estimates of the model parameters are given. Bayesian estimates of the model parameters are obtained. We also present a novel log-location-scale regression model based on the odd Perks–Weibull distribution. Due to the significance of the odd Perks-G family and the survival discretization method, both are used to introduce the discrete odd Perks-G family, a novel discrete distribution class. Real-world data sets are used to emphasize the importance and applicability of the proposed models.

Journal ArticleDOI
TL;DR: In this article, the authors use an active inference Markov decision process model (a Bayes-optimal decision-making agent) to perform a simple task involving social and non-social inferences.

Proceedings ArticleDOI
01 Jan 2022
TL;DR: In this article, a cross-scale normalizing flow (CS-Flow) is proposed for image-level defect detection, which jointly processes multiple feature maps of different scales to assign meaningful likelihoods to input samples.
Abstract: In industrial manufacturing processes, errors frequently occur at unpredictable times and in unknown manifestations. We tackle the problem of automatic defect detection without requiring any image samples of defective parts. Recent works model the distribution of defect-free image data, using either strong statistical priors or overly simplified data representations. In contrast, our approach handles fine-grained representations incorporating the global and local image context while flexibly estimating the density. To this end, we propose a novel fully convolutional cross-scale normalizing flow (CS-Flow) that jointly processes multiple feature maps of different scales. Using normalizing flows to assign meaningful likelihoods to input samples allows for efficient defect detection on image-level. Moreover, due to the preserved spatial arrangement the latent space of the normalizing flow is interpretable which enables to localize defective regions in the image. Our work sets a new state-of-the-art in image-level defect detection on the benchmark datasets Magnetic Tile Defects and MVTec AD showing a 100% AUROC on 4 out of 15 classes.
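The core mechanism — a normalizing flow assigning exact likelihoods via the change-of-variables formula — reduces, in one dimension with a single affine transform, to the following Python sketch (toy numbers; CS-Flow itself is a multi-scale convolutional flow over CNN feature maps):

```python
import math

def standard_normal_logpdf(z):
    return -0.5 * (math.log(2 * math.pi) + z * z)

def affine_flow_logpdf(x, shift, scale):
    """Change-of-variables likelihood for a single affine transform:
    z = (x - shift) / scale, log p(x) = log N(z; 0, 1) - log|scale|."""
    z = (x - shift) / scale
    return standard_normal_logpdf(z) - math.log(abs(scale))

# 'Normal' feature values cluster around 4.0; a defect falls far outside.
shift, scale = 4.0, 0.5          # parameters a flow would learn from data
lp_normal = affine_flow_logpdf(4.1, shift, scale)
lp_defect = affine_flow_logpdf(7.0, shift, scale)
is_defect = lp_defect < lp_normal - 5.0   # simple likelihood threshold
```

Defect-free samples map near the base distribution's mode and receive high likelihood; anything far from the learned distribution receives a low likelihood and can be flagged by a threshold, with no defective training samples required.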

Journal ArticleDOI
TL;DR: This tutorial introduces Bayesian Networks for identifying admissible causal relationships in cross-sectional data and shows how to estimate these models in R through three algorithm families, using an empirical example data set of depressive symptoms.
Abstract: Bayesian Networks are probabilistic graphical models that represent conditional independence relationships among variables as a directed acyclic graph (DAG), where edges can be interpreted as causal effects connecting one causal symptom to an effect symptom. These models can help overcome one of the key limitations of partial correlation networks whose edges are undirected. This tutorial aims to introduce Bayesian Networks to identify admissible causal relationships in cross-sectional data, as well as how to estimate these models in R through three algorithm families with an empirical example data set of depressive symptoms. In addition, we discuss common problems and questions related to Bayesian networks. We recommend Bayesian networks be investigated to gain causal insight in psychological data. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
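The tutorial works in R, but the underlying idea of scoring candidate DAGs by a decomposable likelihood can be sketched in Python for a two-node network (simulated binary data; not the tutorial's code or data):

```python
import math
import random

random.seed(4)

# Generate binary data where B depends on A (A "causes" B).
data = []
for _ in range(500):
    a = 1 if random.random() < 0.5 else 0
    b = 1 if random.random() < (0.8 if a else 0.2) else 0
    data.append((a, b))

def log_lik(samples, edge_a_to_b):
    """Decomposable BN log-likelihood for the two-node graph: P(A) is
    always marginal; P(B) is conditioned on A only if the edge exists."""
    ll = 0.0
    n = len(samples)
    pa = sum(a for a, _ in samples) / n
    for a, b in samples:
        ll += math.log(pa if a else 1 - pa)
        if edge_a_to_b:
            subset = [bb for aa, bb in samples if aa == a]
            pb = sum(subset) / len(subset)
        else:
            pb = sum(bb for _, bb in samples) / n
        pb = min(max(pb, 1e-9), 1 - 1e-9)
        ll += math.log(pb if b else 1 - pb)
    return ll

score_edge = log_lik(data, edge_a_to_b=True)
score_noedge = log_lik(data, edge_a_to_b=False)
```

Structure-learning algorithms search over DAGs using penalised versions of exactly this kind of decomposable score; here the data were generated with a genuine A-to-B dependence, so the edge model scores far higher.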

Journal ArticleDOI
TL;DR: A Bayesian modeling framework is used to systematically investigate the factors that influence the performance of hybrid combinations of human and machine classifiers while taking into account the unique ways human and algorithmic confidence is expressed.
Abstract: Significance: With the increase in artificial intelligence in real-world applications, there is interest in building hybrid systems that take both human and machine predictions into account. Previous work has shown the benefits of separately combining the predictions of diverse machine classifiers or groups of people. Using a Bayesian modeling framework, we extend these results by systematically investigating the factors that influence the performance of hybrid combinations of human and machine classifiers while taking into account the unique ways human and algorithmic confidence is expressed.
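A minimal version of hybrid human–machine combination is a weighted sum of log-odds, a naive-Bayes-style rule. The weights stand in for the calibration of human versus algorithmic confidence and are illustrative knobs, not the paper's fitted model:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def combine(human_p, machine_p, w_human=1.0, w_machine=1.0):
    """Combine two confidence reports as a weighted sum of log-odds;
    the result is mapped back to a probability with the logistic function."""
    z = w_human * logit(human_p) + w_machine * logit(machine_p)
    return 1 / (1 + math.exp(-z))

# Two independent, better-than-chance opinions reinforce each other...
p_agree = combine(0.7, 0.8)
# ...while a confident machine can overrule a hesitant human.
p_conflict = combine(0.45, 0.9)
```

Down-weighting one side (e.g. w_human < 1 for an overconfident human) is the simplest way to account for the different ways human and algorithmic confidence are expressed.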

Journal ArticleDOI
TL;DR: In this article , a transfer learning technique realized by domain adaptation is used to bridge the gap between the biased numerical model and the real structure and to guide the model updating process so that the updated model can accurately indicate the damage state.
