
Showing papers on "Bayesian inference published in 2021"


Journal ArticleDOI
TL;DR: These guidelines are geared towards analyses performed with the open-source statistical software JASP, and most guidelines extend to Bayesian inference in general.
Abstract: Despite the increasing popularity of Bayesian inference in empirical research, few practical guidelines provide detailed recommendations for how to apply Bayesian procedures and interpret the results. Here we offer specific guidelines for four different stages of Bayesian statistical reasoning in a research setting: planning the analysis, executing the analysis, interpreting the results, and reporting the results. The guidelines for each stage are illustrated with a running example. Although the guidelines are geared towards analyses performed with the open-source statistical software JASP, most guidelines extend to Bayesian inference in general.

378 citations


Journal ArticleDOI
TL;DR: A Bayesian temporal factorization (BTF) framework for modeling multidimensional time series---in particular spatiotemporal data---in the presence of missing values is proposed by integrating low-rank matrix/tensor factorization and vector autoregressive process into a single probabilistic graphical model.
Abstract: Large-scale and multidimensional spatiotemporal data sets are becoming ubiquitous in many real-world applications such as monitoring urban traffic and air quality. Making predictions on these time series has become a critical challenge due to not only the large-scale and high-dimensional nature but also the considerable amount of missing data. In this paper, we propose a Bayesian temporal factorization (BTF) framework for modeling multidimensional time series---in particular spatiotemporal data---in the presence of missing values. By integrating low-rank matrix/tensor factorization and vector autoregressive (VAR) process into a single probabilistic graphical model, this framework can characterize both global and local consistencies in large-scale time series data. The graphical model allows us to effectively perform probabilistic predictions and produce uncertainty estimates without imputing those missing values. We develop efficient Gibbs sampling algorithms for model inference and model updating for real-time prediction, and test the proposed BTF framework on several real-world spatiotemporal data sets for both missing data imputation and multi-step rolling prediction tasks. The numerical experiments demonstrate the superiority of the proposed BTF approaches over existing state-of-the-art methods.
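To make the structure described above concrete, here is a minimal NumPy sketch of the generative model the abstract outlines: spatial factors, temporal factors evolving as a VAR(1) process, and observations with missing entries. The dimensions and values are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, R = 30, 200, 4            # locations, time steps, latent rank (assumed)

W = rng.normal(size=(N, R))     # spatial factor matrix
A = 0.9 * np.eye(R)             # VAR(1) coefficient matrix on the temporal factors

X = np.zeros((T, R))            # temporal factors evolving as a VAR(1) process
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.normal(scale=0.1, size=R)

Y = W @ X.T + rng.normal(scale=0.05, size=(N, T))   # observed spatiotemporal matrix

# In the BTF setting, inference conditions only on observed cells;
# here we simply mark 20% of entries as missing.
mask = rng.random((N, T)) < 0.8
print("observed fraction:", mask.mean())
```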

111 citations


Journal ArticleDOI
TL;DR: This formulation of affective inference offers a principled account of the link between affect, (mental) action, and implicit metacognition and characterizes how a deep biological system can infer its affective state and reduce uncertainty about such inferences through internal action.
Abstract: The positive-negative axis of emotional valence has long been recognized as fundamental to adaptive behavior, but its origin and underlying function have largely eluded formal theorizing and computational modeling. Using deep active inference, a hierarchical inference scheme that rests on inverting a model of how sensory data are generated, we develop a principled Bayesian model of emotional valence. This formulation asserts that agents infer their valence state based on the expected precision of their action model-an internal estimate of overall model fitness ("subjective fitness"). This index of subjective fitness can be estimated within any environment and exploits the domain generality of second-order beliefs (beliefs about beliefs). We show how maintaining internal valence representations allows the ensuing affective agent to optimize confidence in action selection preemptively. Valence representations can in turn be optimized by leveraging the (Bayes-optimal) updating term for subjective fitness, which we label affective charge (AC). AC tracks changes in fitness estimates and lends a sign to otherwise unsigned divergences between predictions and outcomes. We simulate the resulting affective inference by subjecting an in silico affective agent to a T-maze paradigm requiring context learning, followed by context reversal. This formulation of affective inference offers a principled account of the link between affect, (mental) action, and implicit metacognition. It characterizes how a deep biological system can infer its affective state and reduce uncertainty about such inferences through internal action (i.e., top-down modulation of priors that underwrite confidence). Thus, we demonstrate the potential of active inference to provide a formal and computationally tractable account of affect. Our demonstration of the face validity and potential utility of this formulation represents the first step within a larger research program. Next, this model can be leveraged to test the hypothesized role of valence by fitting the model to behavioral and neuronal responses.

92 citations


Journal ArticleDOI
TL;DR: A principled Bayesian workflow is introduced that provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model.
Abstract: Experiments in research on memory, language, and in other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan, and easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model, in particular whether or not it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe basic questions one should ask about the model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is data on reading times with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions. All data and code accompanying this article are available from https://osf.io/b2vx9/. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
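As an illustration of the first stage of such a workflow, the following is a minimal prior predictive check in plain NumPy for a log-normal reading-time model; the priors and model form are illustrative assumptions rather than those used in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_obs = 1000, 100

alpha = rng.normal(6.0, 0.5, n_sims)            # prior: log reading-time intercept
beta  = rng.normal(0.0, 0.1, n_sims)            # prior: relative-clause effect
sigma = np.abs(rng.normal(0.0, 0.5, n_sims))    # prior: residual standard deviation

# simulate data sets implied by the priors alone and check that they are plausible
condition = rng.integers(0, 2, size=(n_sims, n_obs))   # 0 = subject RC, 1 = object RC
log_rt = rng.normal(alpha[:, None] + beta[:, None] * condition, sigma[:, None])
rt_ms = np.exp(log_rt)
print("prior predictive reading times (ms):",
      np.percentile(rt_ms, [2.5, 50, 97.5]))
```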

82 citations


Journal ArticleDOI
Osamu Hirose1
TL;DR: The formulation of coherent point drift in a Bayesian setting brings the following consequences and advances to the field: convergence of the algorithm is guaranteed by variational Bayesian inference; the definition of motion coherence as a prior distribution provides a basis for interpretation of the parameters.
Abstract: Coherent point drift is a well-known algorithm for solving point set registration problems, i.e., finding corresponding points between shapes represented as point sets. Despite its advantages over other state-of-the-art algorithms, theoretical and practical issues remain. Among theoretical issues, (1) it is unknown whether the algorithm always converges, and (2) the meaning of the parameters concerning motion coherence is unclear. Among practical issues, (3) the algorithm is relatively sensitive to target shape rotation, and (4) acceleration of the algorithm is restricted to the use of the Gaussian kernel. To overcome these issues and provide a different and more general perspective to the algorithm, we formulate coherent point drift in a Bayesian setting. The formulation brings the following consequences and advances to the field: convergence of the algorithm is guaranteed by variational Bayesian inference; the definition of motion coherence as a prior distribution provides a basis for interpretation of the parameters; rigid and non-rigid registration can be performed in a single algorithm, enhancing robustness against target rotation. We also propose an acceleration scheme for the algorithm that can be applied to non-Gaussian kernels and that provides greater efficiency than coherent point drift.

79 citations


Journal ArticleDOI
TL;DR: UltraNest as discussed by the authors is a general-purpose Bayesian inference package for parameter estimation and model comparison that allows fitting arbitrary models specified as likelihood functions written in Python, C, C++, Fortran, Julia or R with a focus on correctness and speed.
Abstract: UltraNest is a general-purpose Bayesian inference package for parameter estimation and model comparison. It allows fitting arbitrary models specified as likelihood functions written in Python, C, C++, Fortran, Julia or R. With a focus on correctness and speed (in that order), UltraNest is especially useful for multi-modal or non-Gaussian parameter spaces, computationally expensive models, and robust pipelines. Parallelisation to computing clusters and resuming incomplete runs is available.
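A minimal usage sketch is given below, assuming UltraNest's ReactiveNestedSampler interface; argument names and result keys should be checked against the package documentation.

```python
import numpy as np
from ultranest import ReactiveNestedSampler

data = np.array([0.2, -0.1, 0.4, 0.0, 0.3])

def prior_transform(cube):
    # map the unit cube to the parameters: mu in [-5, 5], sigma in (0, 10]
    return np.array([10 * cube[0] - 5, 10 * cube[1] + 1e-6])

def log_likelihood(params):
    mu, sigma = params
    return -0.5 * np.sum(((data - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

sampler = ReactiveNestedSampler(["mu", "sigma"], log_likelihood,
                                transform=prior_transform)
result = sampler.run()
print(result["logz"], result["logzerr"])   # evidence estimate and its uncertainty
```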

76 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in reinforcement learning, and an explicit discrete state comparison between active inference and reinforcement learning on an OpenAI gym baseline.
Abstract: Active inference is a first principle account of how autonomous agents operate in dynamic, nonstationary environments. This problem is also considered in reinforcement learning, but limited work exists on comparing the two approaches on the same discrete-state environments. In this letter, we provide (1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in reinforcement learning, and (2) an explicit discrete-state comparison between active inference and reinforcement learning on an OpenAI gym baseline. We begin by providing a condensed overview of the active inference literature, in particular viewing the various natural behaviors of active inference agents through the lens of reinforcement learning. We show that by operating in a pure belief-based setting, active inference agents can carry out epistemic exploration-and account for uncertainty about their environment-in a Bayes-optimal fashion. Furthermore, we show that the reliance on an explicit reward signal in reinforcement learning is removed in active inference, where reward can simply be treated as another observation we have a preference over; even in the total absence of rewards, agent behaviors are learned through preference learning. We make these properties explicit by showing two scenarios in which active inference agents can infer behaviors in reward-free environments compared to both Q-learning and Bayesian model-based reinforcement learning agents and by placing zero prior preferences over rewards and learning the prior preferences over the observations corresponding to reward. We conclude by noting that this formalism can be applied to more complex settings (e.g., robotic arm movement, Atari games) if appropriate generative models can be formulated. In short, we aim to demystify the behavior of active inference agents by presenting an accessible discrete state-space and time formulation and demonstrate these behaviors in a OpenAI gym environment, alongside reinforcement learning agents.
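At the core of such discrete-state agents is a Bayesian belief update over hidden states. The sketch below shows that update in isolation, with likelihood and transition matrices invented for the example rather than taken from the paper.

```python
import numpy as np

A = np.array([[0.9, 0.1],       # P(observation | hidden state)
              [0.1, 0.9]])
B = np.array([[0.8, 0.3],       # P(next state | current state) under one action
              [0.2, 0.7]])

belief = np.array([0.5, 0.5])   # prior belief over the two hidden states
for obs in [0, 0, 1]:
    belief = B @ belief                 # predict the next state
    belief = A[obs] * belief            # weight by the likelihood of the observation
    belief /= belief.sum()              # normalize to a posterior
    print(belief)
```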

62 citations


Journal ArticleDOI
TL;DR: The proposed Bayesian SVR models provide point-wise probabilistic prediction while keeping the structural risk minimization principle, and they allow the optimal hyper-parameters to be determined by maximizing the Bayesian model evidence.

54 citations


Journal ArticleDOI
TL;DR: Results show the new PRGP model can outperform the previous compatible methods, such as calibrated pure physical models and pure machine learning methods, in estimation precision and input robustness.
Abstract: Despite the wide implementation of machine learning (ML) technique in traffic flow modeling recently, those data-driven approaches often fall short of accuracy in the cases with a small or noisy training dataset. To address this issue, this study presents a new modeling framework, named physics regularized machine learning (PRML), to encode classical traffic flow models (referred as physics models) into the ML architecture and to regularize the ML training process. More specifically, leveraging the Gaussian process (GP) as the base model, a stochastic physics regularized Gaussian process (PRGP) model is developed and a Bayesian inference algorithm is used to estimate the mean and kernel of the PRGP. A physics regularizer, based on macroscopic traffic flow models, is also developed to augment the estimation via a shadow GP and an enhanced latent force model is used to encode physical knowledge into the stochastic process. Based on the posterior regularization inference framework, an efficient stochastic optimization algorithm is then developed to maximize the evidence lowerbound of the system likelihood. For model evaluations, this paper conducts empirical studies on a real-world dataset which is collected from a stretch of I-15 freeway, Utah. Results show the new PRGP model can outperform the previous compatible methods, such as calibrated traffic flow models and pure machine learning methods, in estimation precision and is more robust to the noisy training dataset.
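The following toy sketch illustrates the general idea of regularizing a Gaussian process toward a macroscopic traffic model, here using the Greenshields flow-density relation as the physics prior mean; the values and the simplified residual-GP construction are assumptions, not the authors' PRGP implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
k = np.linspace(5, 120, 25)                        # traffic density observations (veh/km)
vf, kj = 100.0, 150.0                              # free-flow speed and jam density (assumed)
physics_mean = vf * k * (1 - k / kj)               # Greenshields flow (veh/h)
q_obs = physics_mean + rng.normal(0, 300, k.size)  # noisy flow measurements

def rbf(a, b, length=20.0, var=4e5):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

K = rbf(k, k) + 300.0 ** 2 * np.eye(k.size)        # kernel plus observation noise
k_star = np.linspace(5, 120, 100)

# GP posterior mean on the residual around the physics model,
# then add the physics mean back for the final prediction
resid_mean = rbf(k_star, k) @ np.linalg.solve(K, q_obs - physics_mean)
q_pred = vf * k_star * (1 - k_star / kj) + resid_mean
print(q_pred[:5])
```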

53 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed to account for intrinsic uncertainty through a heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference, and integrate the two to quantify predictive uncertainty over the output image.
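A common way to combine the two uncertainty sources described above is to decompose the predictive variance across Monte Carlo forward passes; the sketch below shows that decomposition with invented numbers and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 50                                  # stochastic forward passes (e.g., with dropout on)
mu  = rng.normal(1.0, 0.05, size=T)     # predicted mean per pass
sig = rng.uniform(0.1, 0.2, size=T)     # predicted (heteroscedastic) noise std per pass

aleatoric = np.mean(sig ** 2)           # average predicted noise variance
epistemic = np.var(mu)                  # spread of the means across passes
total_std = np.sqrt(aleatoric + epistemic)
print(aleatoric, epistemic, total_std)
```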

52 citations


Journal ArticleDOI
TL;DR: The Posterior Predictive P-value (PPP) method in the presence of missing data, the Bayesian adaptation of the approximate fit indices RMSEA, CFI and TLI, as well as the Wald test for nested models are discussed.
Abstract: In this article, we discuss the Posterior Predictive P-value (PPP) method in the presence of missing data, the Bayesian adaptation of the approximate fit indices RMSEA, CFI and TLI, as well as the ...

Journal ArticleDOI
TL;DR: The EKS methodology provides a cheap solution to the design problem of where to place points in parameter space to efficiently train an emulator of the parameter-to-data map for the purposes of Bayesian inversion.

Journal ArticleDOI
TL;DR: In this article, a machine learning-based groundwater ensemble modeling framework was proposed to predict groundwater storage change and improve overall prediction reliability; however, the model is not able to produce consistently reliable predictions across the basin.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the framework of model averaging from the perspective of Bayesian statistics, and give useful formulas and approximations for the particular case of least-squares fitting, commonly used in modeling lattice results.
Abstract: Statistical modeling is a key component in the extraction of physical results from lattice field theory calculations. Although the general models used are often strongly motivated by physics, many model variations can frequently be considered for the same lattice data. Model averaging, which amounts to a probability-weighted average over all model variations, can incorporate systematic errors associated with model choice without being overly conservative. We discuss the framework of model averaging from the perspective of Bayesian statistics, and give useful formulas and approximations for the particular case of least-squares fitting, commonly used in modeling lattice results. In addition, we frame the common problem of data subset selection (e.g., choice of minimum and maximum time separation for fitting a two-point correlation function), as a model selection problem and study model averaging as a straightforward alternative to manual selection of fit ranges. Numerical examples involving both mock and real lattice data are given.
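A hedged sketch of the probability-weighted average is shown below, using weights proportional to exp(-(chi^2 + 2k)/2), one common information-criterion choice; the fit results are made up for illustration.

```python
import numpy as np

# chi^2, number of fit parameters, and best-fit value/error for three candidate fits
chi2  = np.array([12.3, 10.8, 10.5])
k     = np.array([2, 3, 4])
value = np.array([1.015, 1.020, 1.022])
error = np.array([0.004, 0.006, 0.008])

logw = -0.5 * (chi2 + 2 * k)            # information-criterion weight per model
w = np.exp(logw - logw.max())
w /= w.sum()

mean = np.sum(w * value)
# statistical variance plus the spread between models (systematic from model choice)
var = np.sum(w * error ** 2) + np.sum(w * value ** 2) - mean ** 2
print(mean, np.sqrt(var))
```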

Journal ArticleDOI
TL;DR: In this article, the first simulations of Bayesian inference for the parameters of massive black hole systems including consistently the merger and ringdown of the signal, as well as higher harmonics were presented.
Abstract: The space-based gravitational wave detector LISA will observe mergers of massive black hole binary systems (MBHBs) to cosmological distances, as well as inspiralling stellar-origin (or stellar-mass) binaries (SBHBs) years before they enter the LIGO/Virgo band. Much remains to be explored for the parameter recovery of both classes of systems. Previous MBHB analyses relied on inspiral-only signals and/or a simplified Fisher matrix analysis, while SBHBs have not yet been extensively analyzed with Bayesian methods. We accelerate likelihood computations by (i) using a Fourier-domain response of the LISA instrument, (ii) using a reduced order model for nonspinning waveforms that include a merger-ringdown and higher harmonics, and (iii) setting the noise realization to zero and computing overlaps in the amplitude/phase representation. We present the first simulations of Bayesian inference for the parameters of massive black hole systems including consistently the merger and ringdown of the signal, as well as higher harmonics. We clarify the roles of LISA response time and frequency dependencies in breaking degeneracies and illustrate how degeneracy breaking unfolds over time. We also find that restricting the merger-dominated signal to its dominant harmonic can make the extrinsic likelihood very degenerate. Including higher harmonics proves to be crucial to breaking degeneracies and considerably improves the localization of the source, with a surviving bimodality in the sky position. We also present simulations of Bayesian inference for the extrinsic parameters of SBHBs, and show that although unimodal, their posterior distributions can have non-Gaussian features.

Journal ArticleDOI
TL;DR: A novel federated probabilistic forecasting scheme of solar irradiation is proposed based on deep learning, variational Bayesian inference, and federated learning (FL), which achieves competitive forecasting performance on the basis of data privacy protection.
Abstract: The irradiation forecasting technology is important for the effective utilization of solar power. Existing irradiation forecasting methods have achieved excellent performance with a massive amount of data in a centralized way. However, concerns about privacy protection and data security, which may arise in the process of data collection and transmission from distributed points to the centralized server, pose challenges to current forecasting methods. In this article, a novel federated probabilistic forecasting scheme of solar irradiation is proposed based on deep learning, variational Bayesian inference, and federated learning (FL). In this scheme, the training data are stored and computed in local Internet of Things devices, only forecasting models are shared. Two real-world datasets from SolarGIS and National Solar Radiation Database, and one benchmark dataset of Folsom are used to verify the feasibility and performance of the federated-based scheme. Comprehensive case studies are conducted to analyze the performance of the proposed scheme in multihorizon. And the effects of using meteorological features and variational Bayesian inference are evaluated. Compared with other state-of-the-art probabilistic centralized models, when data can be shared, the proposed scheme achieves competitive forecasting performance on the basis of data privacy protection. When data sharing is unavailable, due to the cooperative nature inherent (model-sharing) of FL, the performance advantage of the proposed scheme is more obvious.
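The model-sharing idea can be illustrated with a bare-bones federated averaging loop in which only model parameters leave each site; the clients, data, and linear model below are toy assumptions, not the proposed deep forecasting scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
# three sites, each holding private (x, y) data that never leaves the site
sites = []
for _ in range(3):
    x = rng.normal(size=100)
    y = 0.7 * x + 0.1 * rng.normal(size=100)
    sites.append((x, y))

global_w = 0.0
for rnd in range(10):
    local_ws = []
    for x, y in sites:
        w = global_w
        for _ in range(20):                       # local gradient-descent steps
            w -= 0.05 * np.mean((w * x - y) * x)
        local_ws.append(w)
    global_w = float(np.mean(local_ws))           # the server only averages parameters
print(global_w)
```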

Journal ArticleDOI
TL;DR: In this article, the authors cast synaptic plasticity as a problem of Bayesian inference, and thus provide a normative view of learning, and propose two hypotheses to explain the large variability in the size of postsynaptic potentials.
Abstract: Learning, especially rapid learning, is critical for survival. However, learning is hard; a large number of synaptic weights must be set based on noisy, often ambiguous, sensory information. In such a high-noise regime, keeping track of probability distributions over weights is the optimal strategy. Here we hypothesize that synapses take that strategy; in essence, when they estimate weights, they include error bars. They then use that uncertainty to adjust their learning rates, with more uncertain weights having higher learning rates. We also make a second, independent, hypothesis: synapses communicate their uncertainty by linking it to variability in postsynaptic potential size, with more uncertainty leading to more variability. These two hypotheses cast synaptic plasticity as a problem of Bayesian inference, and thus provide a normative view of learning. They generalize known learning rules, offer an explanation for the large variability in the size of postsynaptic potentials and make falsifiable experimental predictions.
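The first hypothesis, uncertainty-dependent learning rates, can be illustrated with a scalar Kalman-filter-style update in which a more uncertain weight estimate moves faster; the numbers below are illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
true_w = 0.8
mu, var = 0.0, 1.0           # belief about the synaptic weight: mean and variance
noise_var = 0.5              # variance of the noisy feedback signal

for step in range(20):
    x = 1.0                                          # presynaptic input
    err = true_w * x + rng.normal(0, np.sqrt(noise_var)) - mu * x
    gain = var * x / (var * x ** 2 + noise_var)      # learning rate grows with uncertainty
    mu = mu + gain * err
    var = (1 - gain * x) * var                       # uncertainty shrinks after learning
print(mu, var)
```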

Journal ArticleDOI
TL;DR: This tutorial paper reviews the use of advanced Monte Carlo sampling methods in the context of Bayesian model updating for engineering applications and considers the recorded data set as a single piece of information which is used to make inferences and estimations on time-invariant model parameters.

Journal ArticleDOI
TL;DR: Sanity as mentioned in this paper estimates expression values and associated error bars directly from raw unique molecular identifier (UMI) counts without any tunable parameters, and shows that Sanity outperforms other normalization methods on downstream tasks, such as finding nearest-neighbor cells and clustering cells into subtypes.
Abstract: Despite substantial progress in single-cell RNA-seq (scRNA-seq) data analysis methods, there is still little agreement on how to best normalize such data. Starting from the basic requirements that inferred expression states should correct for both biological and measurement sampling noise and that changes in expression should be measured in terms of fold changes, we here derive a Bayesian normalization procedure called Sanity (SAmpling-Noise-corrected Inference of Transcription activitY) from first principles. Sanity estimates expression values and associated error bars directly from raw unique molecular identifier (UMI) counts without any tunable parameters. Using simulated and real scRNA-seq datasets, we show that Sanity outperforms other normalization methods on downstream tasks, such as finding nearest-neighbor cells and clustering cells into subtypes. Moreover, we show that by systematically overestimating the expression variability of genes with low expression and by introducing spurious correlations through mapping the data to a lower-dimensional representation, other methods yield severely distorted pictures of the data.

Journal ArticleDOI
TL;DR: In this article, multi-speaker tracking is cast as maximizing the posterior joint distribution of a set of continuous and discrete latent variables given the past and current observations; a variational inference model approximates this joint distribution with a factorized distribution, and the solution takes the form of a closed-form expectation-maximization procedure.
Abstract: In this article, we address the problem of tracking multiple speakers via the fusion of visual and auditory information. We propose to exploit the complementary nature and roles of these two modalities in order to accurately estimate smooth trajectories of the tracked persons, to deal with the partial or total absence of one of the modalities over short periods of time, and to estimate the acoustic status–either speaking or silent–of each tracked person over time. We propose to cast the problem at hand into a generative audio-visual fusion (or association) model formulated as a latent-variable temporal graphical model. This may well be viewed as the problem of maximizing the posterior joint distribution of a set of continuous and discrete latent variables given the past and current observations, which is intractable. We propose a variational inference model which amounts to approximate the joint distribution with a factorized distribution. The solution takes the form of a closed-form expectation maximization procedure. We describe in detail the inference algorithm, we evaluate its performance and we compare it with several baseline methods. These experiments show that the proposed audio-visual tracker performs well in informal meetings involving a time-varying number of people.

Journal ArticleDOI
TL;DR: A particular class of scalable Monte Carlo algorithms, stochastic gradient Markov chain Monte Carlo (SGMCMC), which utilizes data subsampling techniques to reduce the per-iteration cost of MCMC, is presented.
Abstract: Markov chain Monte Carlo (MCMC) algorithms are generally regarded as the gold standard technique for Bayesian inference. They are theoretically well-understood and conceptually simple to apply in p...
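The simplest SGMCMC method, stochastic gradient Langevin dynamics, is sketched below for a Gaussian mean with a Gaussian prior; the step size and batch size are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(2.0, 1.0, size=10_000)
N, batch, eps = data.size, 100, 1e-5
theta, samples = 0.0, []

for it in range(5_000):
    xb = rng.choice(data, batch, replace=False)
    grad_log_prior = -theta                           # N(0, 1) prior on theta
    grad_log_lik = (N / batch) * np.sum(xb - theta)   # rescaled minibatch gradient
    theta += 0.5 * eps * (grad_log_prior + grad_log_lik) + rng.normal(0, np.sqrt(eps))
    samples.append(theta)
print(np.mean(samples[1_000:]))                       # posterior mean estimate (~2.0)
```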

Journal ArticleDOI
TL;DR: hIPPYlib as discussed by the authors is an extensible software framework for solving large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields.
Abstract: We present an extensible software framework, hIPPYlib, for solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
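The MAP-plus-Hessian strategy described above can be illustrated in a small finite-dimensional setting with a generic Laplace approximation; the sketch below uses NumPy/SciPy on a toy linear inverse problem and is not hIPPYlib code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
A = rng.normal(size=(50, 3))                     # toy linear forward operator
m_true = np.array([1.0, -0.5, 2.0])
d = A @ m_true + rng.normal(0, 0.1, size=50)     # noisy observations
noise_var = 0.1 ** 2

def neg_log_post(m):
    return np.sum((A @ m - d) ** 2) / (2 * noise_var) + 0.5 * np.sum(m ** 2)

# MAP point via Newton-CG on the negative log posterior
res = minimize(neg_log_post, np.zeros(3), method="Newton-CG",
               jac=lambda m: A.T @ (A @ m - d) / noise_var + m,
               hess=lambda m: A.T @ A / noise_var + np.eye(3))

H = A.T @ A / noise_var + np.eye(3)              # Hessian of the negative log posterior
post_cov = np.linalg.inv(H)                      # Gaussian (Laplace) approximation
print(res.x, np.sqrt(np.diag(post_cov)))         # MAP point and marginal std deviations
```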

Journal ArticleDOI
TL;DR: Bayesian models provide recursive inference naturally because they can formally reconcile new data and existing scientific information as discussed by the authors, however, popular use of Bayesian methods often avoids priors and thus avoids the need for prior knowledge.
Abstract: Bayesian models provide recursive inference naturally because they can formally reconcile new data and existing scientific information. However, popular use of Bayesian methods often avoids priors ...

Journal ArticleDOI
TL;DR: It is shown for the first time that artificial neural networks can promptly detect and characterize binary neutron star gravitational-wave signals in real LIGO data, and distinguish them from noise and signals from coalescing black-hole binaries.

Journal ArticleDOI
TL;DR: Results demonstrate that the posterior probability distributions of the unknown structural parameters can be successfully identified, and reliable probabilistic model updating and damage identification can be achieved.

Journal ArticleDOI
TL;DR: It will be shown that the modeling choice of kernel density functions plays perhaps the most impactful role in determining the posterior contraction rates in the misspecified situations.
Abstract: We study posterior contraction behaviors for parameters of interest in the context of Bayesian mixture modeling, where the number of mixing components is unknown while the model itself may or may not be correctly specified. Two representative types of prior specification will be considered: one requires explicitly a prior distribution on the number of mixture components, while the other places a nonparametric prior on the space of mixing distributions. The former is shown to yield an optimal rate of posterior contraction on the model parameters under minimal conditions, while the latter can be utilized to consistently recover the unknown number of mixture components, with the help of a fast probabilistic post-processing procedure. We then turn the study of these Bayesian procedures to the realistic settings of model misspecification. It will be shown that the modeling choice of kernel density functions plays perhaps the most impactful roles in determining the posterior contraction rates in the misspecified situations. Drawing on concrete posterior contraction rates established in this paper we wish to highlight some aspects about the interesting tradeoffs between model expressiveness and interpretability that a statistical modeler must negotiate in the rich world of mixture modeling.

Journal ArticleDOI
TL;DR: Methods for Bayesian learning of neural networks that allow consideration of both aleatoric uncertainties that account for the inherent stochasticity of the data-generating process, and epistemic uncertainties, which arise from consideration of limited amounts of data are presented.

Journal ArticleDOI
TL;DR: A double-layer distributed monitoring approach based on multiblock slow feature analysis and multiblock independent component analysis is proposed, in which slow feature analysis and independent component analysis monitoring models are generated for the dynamic and static subblocks, respectively.
Abstract: Due to the complex static, dynamic, and large-scale characteristics for modern industrial processes, in this article, we propose a double-layer distributed monitoring approach based on multiblock slow feature analysis and multiblock independent component analysis. To this end, the processed dataset is divided into the static and dynamic blocks on the basis of the sequential information of each variable in the first layer. Considering the correlations between the variables in the large-scale processes, the sequential correlation matrices in two blocks are calculated, which serves as the second-layer block division rule. Then, the static and dynamic blocks are further divided into several static and dynamic subblocks in which the variables in each subblock are strongly correlated and in the same state. The slow feature analysis and independent component analysis monitoring models are, respectively, generated for the dynamic and static subblocks. Finally, the monitoring results in each subblock are integrated by Bayesian inference to get the final statistics. The average fault detection rate of the proposed method for the Tennessee Eastman process is 0.842, while those of the other traditional methods are lower than 0.75, which shows the advantages of the proposed method.
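The final fusion step can be illustrated with one common Bayesian inference combination heuristic used in multiblock monitoring, which converts each sub-block statistic into a fault probability and weights those probabilities by their fault evidence; the statistics, limits, and significance level below are assumptions, not values from the paper.

```python
import numpy as np

alpha = 0.01                                # significance level (assumed)
stats  = np.array([3.2, 7.5, 1.1])          # current statistic in each sub-block
limits = np.array([5.0, 5.0, 5.0])          # corresponding control limits

# heuristic conditional likelihoods of each statistic under normal (N) and faulty (F) states
p_stat_given_N = np.exp(-stats / limits)
p_stat_given_F = np.exp(-limits / stats)

p_F_given_stat = (p_stat_given_F * alpha) / (p_stat_given_F * alpha
                                             + p_stat_given_N * (1 - alpha))
weights = p_stat_given_F / p_stat_given_F.sum()   # weight blocks by their fault evidence
fused = np.sum(weights * p_F_given_stat)          # final combined statistic
print("fused fault probability:", fused, "alarm:", fused > alpha)
```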

Journal ArticleDOI
TL;DR: Experimental results demonstrate the effectiveness of the proposed fully Bayesian treatment of robust tensor factorization in multi-rank determination as well as its superiority in image denoising and background modeling over state-of-the-art approaches.
Abstract: Robust tensor factorization is a fundamental problem in machine learning and computer vision, which aims at decomposing tensors into low-rank and sparse components. However, existing methods either suffer from limited modeling power in preserving low-rank structures, or have difficulties in determining the target tensor rank and the trade-off between the low-rank and sparse components. To address these problems, we propose a fully Bayesian treatment of robust tensor factorization along with a generalized sparsity-inducing prior. By adapting the recently proposed low-tubal-rank model in a generative manner, our method is effective in preserving low-rank structures. Moreover, benefiting from the proposed prior and the Bayesian framework, the proposed method can automatically determine the tensor rank while inferring the trade-off between the low-rank and sparse components. For model estimation, we develop a variational inference algorithm, and further improve its efficiency by reformulating the variational updates in the frequency domain. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of the proposed method in multi-rank determination as well as its superiority in image denoising and background modeling over state-of-the-art approaches.

Journal ArticleDOI
TL;DR: A probabilistic calibration method is proposed for evaluating the accuracy and applicability of available deterministic models for the mechanical performances of RAC based on the Bayesian theory and the Markov Chain Monte Carlo method.
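Such a calibration typically relies on a Markov Chain Monte Carlo sampler; a bare-bones Metropolis-Hastings sketch for calibrating a single model parameter against synthetic data is shown below, with all values illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
obs = rng.normal(30.0, 2.0, size=20)        # e.g., measured strengths of RAC specimens

def log_post(theta):
    if theta <= 0:
        return -np.inf
    log_prior = -0.5 * ((theta - 25.0) / 10.0) ** 2       # weak Gaussian prior
    log_lik = -0.5 * np.sum(((obs - theta) / 2.0) ** 2)   # noise std assumed known (2.0)
    return log_prior + log_lik

theta, chain = 25.0, []
for it in range(20_000):
    prop = theta + rng.normal(0, 0.5)                     # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print(np.mean(chain[5_000:]), np.std(chain[5_000:]))      # posterior mean and std
```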