
Showing papers on "Bayesian inference" published in 2007


Journal ArticleDOI
TL;DR: The incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm and to one based on maximum likelihood; the three have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible.

2,597 citations


Journal ArticleDOI
TL;DR: This paper develops a dynamic stochastic general equilibrium (DSGE) model for an open economy and estimates it on Euro area data using Bayesian estimation techniques, incorporating several open economy features as well as a number of nominal and real frictions that have proven important for the empirical fit of closed economy models.

958 citations


Journal ArticleDOI
TL;DR: It is shown how the ReML objective function can be adjusted to provide an approximation to the log-evidence for a particular model, which means ReML can be used for model selection, specifically to select or compare models with different covariance components.

843 citations


Journal ArticleDOI
01 Apr 2007
TL;DR: The authors present a Bayesian framework for understanding how adults and children learn the meanings of words, which explains how learners can generalize meaningfully from just one or a few positive examples of a novel word's referents.
Abstract: The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word’s referents, by making rational inductive inferences that integrate prior knowledge about plausible word meanings with the statistical structure of the observed examples. The theory addresses shortcomings of the two best known approaches to modeling word learning, based on deductive hypothesis elimination and associative learning. Three experiments with adults and children test the Bayesian account’s predictions in the context of learning words for object categories at multiple levels of a taxonomic hierarchy. Results provide strong support for the Bayesian account over competing accounts, in terms of both quantitative model fits and the ability to explain important qualitative phenomena. Several extensions of the basic theory are discussed, illustrating the broader potential for Bayesian models of word learning.

738 citations


Journal ArticleDOI
TL;DR: The Deviance Information Criterion combines ideas from both the information-theoretic and the Bayesian heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy.
Abstract: Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools, those based on information theory such as the Akaike Information Criterion (AIC), and those on Bayesian inference such as the Bayesian evidence and Bayesian Information Criterion (BIC). The Deviance Information Criterion combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from Wilkinson Microwave Anisotropy Probe 3-yr data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.

725 citations
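
As a rough illustration of how the DIC is obtained from posterior samples, the sketch below computes the posterior mean deviance, the effective number of parameters p_D and the DIC for a toy Gaussian model; the data, the known-variance likelihood and the conjugate draws standing in for an MCMC run are all assumptions, not the paper's WMAP analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: i.i.d. normal data with unknown mean and known sigma = 1.
data = rng.normal(loc=0.3, scale=1.0, size=50)

def log_like(mu):
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

# Stand-in for MCMC output: exact draws from the conjugate posterior N(xbar, 1/n).
post_mu = rng.normal(loc=data.mean(), scale=1.0 / np.sqrt(len(data)), size=5000)

deviance = np.array([-2.0 * log_like(m) for m in post_mu])
mean_dev = deviance.mean()                      # D-bar, the posterior mean deviance
dev_at_mean = -2.0 * log_like(post_mu.mean())   # D(theta-bar), deviance at the posterior mean
p_d = mean_dev - dev_at_mean                    # effective number of parameters
dic = mean_dev + p_d                            # DIC = D-bar + p_D

print(f"DIC = {dic:.2f}, p_D = {p_d:.2f}")
```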


Posted Content
TL;DR: Winner of the 2004 DeGroot Prize, this graduate-level textbook introduces Bayesian statistics and decision theory, covering both the basic ideas of statistical theory and some of the more modern and advanced topics of Bayesian statistics such as complete class theorems, the Stein effect, Bayesian model choice, hierarchical and empirical Bayes modeling, Monte Carlo integration including Gibbs sampling, and other MCMC techniques.
Abstract: Winner of the 2004 DeGroot Prize. This paperback edition, a reprint of the 2001 edition, is a graduate-level textbook that introduces Bayesian statistics and decision theory. It covers both the basic ideas of statistical theory, and also some of the more modern and advanced topics of Bayesian statistics such as complete class theorems, the Stein effect, Bayesian model choice, hierarchical and empirical Bayes modeling, Monte Carlo integration including Gibbs sampling, and other MCMC techniques. It was awarded the 2004 DeGroot Prize by the International Society for Bayesian Analysis (ISBA), whose citation noted that it sets "a new standard for modern textbooks dealing with Bayesian methods, especially those using MCMC techniques, and that it is a worthy successor to DeGroot's and Berger's earlier texts".

630 citations


Journal ArticleDOI
TL;DR: This paper presents a newly developed simulation-based approach for Bayesian model updating, model class selection, and model averaging called the transitional Markov chain Monte Carlo (TMCMC) approach, motivated by the adaptive Metropolis–Hastings method.
Abstract: This paper presents a newly developed simulation-based approach for Bayesian model updating, model class selection, and model averaging called the transitional Markov chain Monte Carlo (TMCMC) approach. The idea behind TMCMC is to avoid the problem of sampling from difficult target probability density functions (PDFs) by instead sampling from a series of intermediate PDFs that converge to the target PDF and are easier to sample from. The TMCMC approach is motivated by the adaptive Metropolis–Hastings method developed by Beck and Au in 2002 and is based on Markov chain Monte Carlo. It is shown that TMCMC is able to draw samples from some difficult PDFs (e.g., multimodal PDFs, very peaked PDFs, and PDFs with a flat manifold). The TMCMC approach can also estimate the evidence of the chosen probabilistic model class conditioned on the measured data, a key component for Bayesian model class selection and model averaging. Three examples are used to demonstrate the effectiveness of the TMCMC approach in Bayesian model updating, ...

616 citations
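
The tempering idea behind TMCMC can be conveyed with a minimal sketch: samples from the prior are pushed through a sequence of intermediate PDFs proportional to prior × likelihood^β by reweighting, resampling and a Metropolis perturbation, while the stage weights accumulate an evidence estimate. The bimodal toy likelihood, the fixed β schedule and the single Metropolis step per stage are assumptions; the paper's algorithm chooses the schedule adaptively.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_prior(x):                 # standard normal prior
    return -0.5 * x**2

def log_like(x):                  # bimodal, "difficult" unnormalised likelihood
    return np.logaddexp(-0.5 * ((x - 3.0) / 0.3) ** 2,
                        -0.5 * ((x + 3.0) / 0.3) ** 2)

n = 2000
samples = rng.normal(size=n)              # stage 0: samples from the prior
betas = np.linspace(0.0, 1.0, 11)         # fixed schedule (TMCMC adapts this)
log_evidence = 0.0                        # evidence of the unnormalised likelihood

for b_prev, b_next in zip(betas[:-1], betas[1:]):
    # Plausibility weights for tempering the likelihood from beta_prev to beta_next.
    logw = (b_next - b_prev) * log_like(samples)
    log_evidence += logw.max() + np.log(np.mean(np.exp(logw - logw.max())))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample by weight, then perturb each sample with one Metropolis step
    # targeting p_j(x) proportional to prior(x) * likelihood(x)^beta_next.
    samples = rng.choice(samples, size=n, p=w)
    prop = samples + 0.5 * rng.normal(size=n)
    log_acc = (log_prior(prop) + b_next * log_like(prop)
               - log_prior(samples) - b_next * log_like(samples))
    samples = np.where(np.log(rng.uniform(size=n)) < log_acc, prop, samples)

print("log-evidence estimate:", log_evidence)
print("posterior mean, std:", samples.mean(), samples.std())
```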


Journal ArticleDOI
TL;DR: The Integrated Bayesian Uncertainty Estimator (IBUNE) as mentioned in this paper is a new framework to account for the major uncertainties of hydrologic rainfall-runoff predictions explicitly.
Abstract: The conventional treatment of uncertainty in rainfall-runoff modeling primarily attributes uncertainty in the input-output representation of the model to uncertainty in the model parameters, without explicitly addressing the input, output, and model structural uncertainties. This paper presents a new framework, the Integrated Bayesian Uncertainty Estimator (IBUNE), to account explicitly for the major uncertainties of hydrologic rainfall-runoff predictions. IBUNE distinguishes between the various sources of uncertainty, including parameter, input, and model structural uncertainty. An input error model in the form of a Gaussian multiplier has been introduced within IBUNE. These multipliers are assumed to be drawn from an identical distribution with unknown mean and variance, which are estimated along with other hydrological model parameters by a Markov chain Monte Carlo (MCMC) scheme. IBUNE also includes the Bayesian model averaging (BMA) scheme, which is employed to further improve the prediction skill and address model structural uncertainty using multiple model outputs. A series of case studies using three rainfall-runoff models to predict the streamflow in the Leaf River basin, Mississippi, are used to examine the necessity and usefulness of this technique. The results suggest that ignoring either input forcing error or model structural uncertainty will lead to unrealistic model simulations and incorrect uncertainty bounds.

537 citations
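
A minimal sketch of the Gaussian input-multiplier idea: the log-posterior below scales an observed rainfall series by latent multipliers that share a normal distribution with unknown mean and variance, on top of a toy linear-reservoir model. The reservoir model, the flat priors and all variable names are assumptions chosen for illustration, not the IBUNE code; an MCMC scheme would sample the parameters, the multipliers and the multiplier mean and variance jointly from this kind of target.

```python
import numpy as np

def log_posterior(theta, multipliers, mult_mean, mult_var, rain_obs, q_obs):
    """Unnormalised log-posterior for a toy linear reservoir with
    IBUNE-style Gaussian rainfall multipliers (a sketch, not IBUNE itself)."""
    k, sigma = theta                       # recession constant, output error std
    if not (0.0 < k < 1.0) or sigma <= 0.0 or mult_var <= 0.0:
        return -np.inf
    rain_true = multipliers * rain_obs     # corrected input forcing
    # Simple linear reservoir: S_t = (1-k) S_{t-1} + rain_t, Q_t = k * S_t.
    storage, q_sim = 0.0, []
    for r in rain_true:
        storage = (1.0 - k) * storage + r
        q_sim.append(k * storage)
    q_sim = np.asarray(q_sim)
    log_lik = (-0.5 * np.sum((q_obs - q_sim) ** 2) / sigma**2
               - len(q_obs) * np.log(sigma))
    # Multipliers share one normal distribution with unknown mean and variance.
    log_mult = (-0.5 * np.sum((multipliers - mult_mean) ** 2) / mult_var
                - 0.5 * len(multipliers) * np.log(mult_var))
    return log_lik + log_mult              # flat priors on the remaining terms

# Tiny demonstration call on synthetic data (values are arbitrary).
rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=50)
q = 0.3 * rain + rng.normal(scale=0.5, size=50)
print(log_posterior((0.3, 0.5), np.ones(50), 1.0, 0.1, rain, q))
```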


Journal ArticleDOI
TL;DR: It is demonstrated that the proposed 'bayesian epistasis association mapping' method is significantly more powerful than existing approaches and that genome-wide case-control epistasis mapping with many thousands of markers is both computationally and statistically feasible.
Abstract: Epistatic interactions among multiple genetic variants in the human genome may be important in determining individual susceptibility to common diseases. Although some existing computational methods for identifying genetic interactions have been effective for small-scale studies, we here propose a method, denoted 'bayesian epistasis association mapping' (BEAM), for genome-wide case-control studies. BEAM treats the disease-associated markers and their interactions via a bayesian partitioning model and computes, via Markov chain Monte Carlo, the posterior probability that each marker set is associated with the disease. Testing this on an age-related macular degeneration genome-wide association data set, we demonstrate that the method is significantly more powerful than existing approaches and that genome-wide case-control epistasis mapping with many thousands of markers is both computationally and statistically feasible.

503 citations


Journal ArticleDOI
01 Dec 2007-Synthese
TL;DR: It is suggested that these perceptual processes are just one emergent property of systems that conform to a free-energy principle, and that the system’s state and structure encode an implicit and probabilistic model of the environment.
Abstract: If one formulates Helmholtz's ideas about perception in terms of modern-day theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. Using constructs from statistical physics it can be shown that the problems of inferring what causes our sensory input and learning causal regularities in the sensorium can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory information is generated. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of the brain's organisation and responses. In this paper, we suggest that these perceptual processes are just one emergent property of systems that conform to a free-energy principle. The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimise free-energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system's state and structure encode an implicit and probabilistic model of the environment. We will look at models entailed by the brain and how minimisation of free-energy can explain its dynamics and structure.

498 citations
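
The bound referred to in the abstract can be written compactly; the notation below (recognition density q over hidden causes ϑ, sensory data y) is a commonly used form and is assumed here rather than quoted from the paper:

```latex
F \;=\; \mathbb{E}_{q(\vartheta)}\!\big[\ln q(\vartheta) - \ln p(y,\vartheta)\big]
  \;=\; -\ln p(y) \;+\; D_{\mathrm{KL}}\!\big[q(\vartheta)\,\big\|\,p(\vartheta \mid y)\big]
  \;\ge\; -\ln p(y),
```

so minimising F with respect to q tightens the bound on surprise (perception), while minimising it through changes in how the environment is sampled corresponds to action.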


Book
30 Jul 2007
TL;DR: The introduction to Bayesian statistics as mentioned in this paper presents Bayes theorem, the estimation of unknown parameters, the determination of confidence regions and the derivation of tests of hypotheses for the unknown parameters in a manner that is simple, intuitive and easy to comprehend.
Abstract: The Introduction to Bayesian Statistics (2nd Edition) presents Bayes theorem, the estimation of unknown parameters, the determination of confidence regions and the derivation of tests of hypotheses for the unknown parameters, in a manner that is simple, intuitive and easy to comprehend. The methods are applied to linear models, in models for a robust estimation, for prediction and filtering and in models for estimating variance components and covariance components. Regularization of inverse problems and pattern recognition are also covered while Bayesian networks serve for reaching decisions in systems with uncertainties. If analytical solutions cannot be derived, numerical algorithms are presented such as the Monte Carlo integration and Markov Chain Monte Carlo methods.
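
For orientation, the theorem on which the book's estimation results are built, written in standard notation for a parameter θ and data y (the symbols are generic, not taken from the book):

```latex
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\,p(\theta)}{\int p(y \mid \theta')\,p(\theta')\,\mathrm{d}\theta'}
\;\propto\; p(y \mid \theta)\,p(\theta).
```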

Journal ArticleDOI
TL;DR: This work presents a reformulation of the Bayesian approach to inverse problems, that seeks to accelerate Bayesian inference by using polynomial chaos expansions to represent random variables, and evaluates the utility of this technique on a transient diffusion problem arising in contaminant source inversion.
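
A minimal sketch of the general idea of using a polynomial chaos surrogate inside Bayesian inversion: a Hermite expansion is fit to an assumed one-dimensional forward model by least squares, and the surrogate then replaces the forward model in the unnormalised posterior. The forward model, noise level and grid-based posterior summary are assumptions; the paper works with a transient diffusion problem and MCMC.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(9)

def forward(theta):
    """Assumed 'expensive' forward model (stand-in for a PDE solver)."""
    return np.exp(0.3 * theta) + 0.1 * theta**2

# Non-intrusive PCE: least-squares fit of Hermite coefficients on prior samples.
order = 5
train = rng.normal(size=200)                   # theta ~ N(0, 1) prior samples
V = He.hermevander(train, order)               # probabilists' Hermite basis matrix
coef, *_ = np.linalg.lstsq(V, forward(train), rcond=None)

def surrogate(theta):
    return He.hermeval(theta, coef)

# The surrogate replaces the forward model inside the unnormalised log-posterior.
y_obs, noise_sd = 1.45, 0.05
def log_post(theta):
    return -0.5 * ((y_obs - surrogate(theta)) / noise_sd) ** 2 - 0.5 * theta**2

thetas = np.linspace(-3, 3, 601)
dtheta = thetas[1] - thetas[0]
post = np.exp(log_post(thetas))
post /= post.sum() * dtheta                    # normalise on the grid
print("posterior mean of theta:", (thetas * post).sum() * dtheta)
```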

Journal ArticleDOI
TL;DR: DMA over a large model space led to better predictions than the single best performing physically motivated model, and it recovered both constant and time-varying regression parameters and model specifications quite well.
Abstract: We consider the problem of online prediction when it is uncertain what the best prediction model to use is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the "correct" model to vary over time. The state space and Markov chain models are both specified in terms of forgetting, leading to a highly parsimonious representation. As a special case, when the model and parameters do not change, DMA is a recursive implementation of standard Bayesian model averaging, which we call recursive model averaging. The method is applied to the problem of predicting the output strip thickness for a cold rolling mill, where the output is measured with a time delay. We found that when only a small number of physically motivated models were considered and one was clearly best, the method quickly converged to the best model, and the cost of model uncertainty was small; indeed DMA performed slightly better than the best physical model. When model uncertainty and the number of models considered were large, our method ensured that the penalty for model uncertainty was small. At the beginning of the process, when control is most difficult, we found that DMA over a large model space led to better predictions than the single best performing physically motivated model. We also applied the method to several simulated examples, and found that it recovered both constant and time-varying regression parameters and model specifications quite well.
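
A compact sketch of the forgetting-based recursions the abstract describes, for two toy regression models: each model's parameters are updated by a Kalman-filter step with covariance forgetting factor λ, and the model probabilities are flattened with forgetting factor α before being multiplied by the one-step predictive densities. The data, the two candidate models and the known observation variance are assumptions, not the cold rolling mill application.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two candidate models: intercept only, and intercept plus slope.
T, lam, alpha = 300, 0.99, 0.99              # forgetting factors (states / model probs)
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=T)
models = [np.array([0]), np.array([0, 1])]   # column indices used by each model

K = len(models)
prob = np.full(K, 1.0 / K)
theta = [np.zeros(len(m)) for m in models]
P = [np.eye(len(m)) * 10.0 for m in models]  # parameter covariances
sigma2 = 0.5**2                              # observation variance (assumed known)

for t in range(T):
    pred_dens = np.empty(K)
    for k, cols in enumerate(models):
        x = X[t, cols]
        Pk = P[k] / lam                      # forgetting inflates the covariance
        yhat = x @ theta[k]
        S = x @ Pk @ x + sigma2              # one-step predictive variance
        pred_dens[k] = np.exp(-0.5 * (y[t] - yhat) ** 2 / S) / np.sqrt(2 * np.pi * S)
        gain = Pk @ x / S                    # Kalman update of this model's parameters
        theta[k] = theta[k] + gain * (y[t] - yhat)
        P[k] = Pk - np.outer(gain, x) @ Pk
    prior = prob**alpha / np.sum(prob**alpha)   # forgetting on the model probabilities
    prob = prior * pred_dens
    prob /= prob.sum()

print("final model probabilities:", np.round(prob, 3))
```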

Journal ArticleDOI
TL;DR: In this article, the predictive probability density functions (PDFs) for weather quantities are represented as a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are posterior probabilities of the models generating the forecasts and reflect the forecasts' relative contributions to predictive skill over a training period.
Abstract: Bayesian model averaging (BMA) is a statistical way of postprocessing forecast ensembles to create predictive probability density functions (PDFs) for weather quantities. It represents the predictive PDF as a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are posterior probabilities of the models generating the forecasts and reflect the forecasts’ relative contributions to predictive skill over a training period. It was developed initially for quantities whose PDFs can be approximated by normal distributions, such as temperature and sea level pressure. BMA does not apply in its original form to precipitation, because the predictive PDF of precipitation is nonnormal in two major ways: it has a positive probability of being equal to zero, and it is skewed. In this study BMA is extended to probabilistic quantitative precipitation forecasting. The predictive PDF corresponding to one ensemble member is a mixture of a discrete component at zero and a gam...
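
The zero/gamma mixture for a single ensemble member can be sketched as below; the logistic link for the probability of zero rain, the linear mean model for the gamma part and all coefficient values are assumptions (the paper uses power-transformed forecasts and fits coefficients over a training period). The BMA forecast is then the weight-averaged combination of such member distributions.

```python
import numpy as np
from scipy import stats

def member_predictive_cdf(y, forecast, a0, a1, b0, b1, shape):
    """Sketch of a point-mass-at-zero plus gamma mixture for one ensemble
    member; the links and coefficient values here are illustrative assumptions."""
    p_zero = 1.0 / (1.0 + np.exp(-(a0 + a1 * forecast)))   # P(y = 0 | forecast)
    mean_pos = b0 + b1 * forecast                           # gamma mean given y > 0
    scale = mean_pos / shape
    cdf_pos = stats.gamma.cdf(y, a=shape, scale=scale)
    return p_zero + (1.0 - p_zero) * cdf_pos

# Example: probability of at most 1 mm of rain given a 2 mm member forecast.
print(member_predictive_cdf(1.0, forecast=2.0,
                            a0=-0.5, a1=-0.4, b0=0.2, b1=0.8, shape=0.8))
```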

Journal ArticleDOI
TL;DR: This paper examines the effect of a variety of prior assumptions on the inference concerning model size, posterior inclusion probabilities of regressors, and predictive performance in cross-country growth regressions, using three datasets with 41 to 67 potential drivers of growth and 72 to 93 observations.
Abstract: This paper examines the problem of variable selection in linear regression models. Bayesian model averaging has become an important tool in empirical settings with large numbers of potential regressors and relatively limited numbers of observations. The paper analyzes the effect of a variety of prior assumptions on the inference concerning model size, posterior inclusion probabilities of regressors, and predictive performance. The analysis illustrates these issues in the context of cross-country growth regressions using three datasets with 41 to 67 potential drivers of growth and 72 to 93 observations. The results favor particular prior structures for use in this and related contexts.
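
A small sketch of the machinery the paper studies: enumerating all models over a handful of candidate regressors, scoring each with a Zellner g-prior marginal likelihood, and reporting posterior inclusion probabilities. The simulated data, the particular value of g and the uniform model prior are assumptions; the paper's point is precisely how such prior choices affect the results.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Toy data: 5 candidate growth determinants, only the first two matter.
n, p = 70, 5
X = rng.normal(size=(n, p))
y = 1.0 + 0.6 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(scale=1.0, size=n)
g = 1.0 / max(n, p**2)        # one common "benchmark" choice; an assumption here

def log_marglik(cols):
    """Log marginal likelihood under a g-prior on the slopes (up to a constant
    common to all models), with flat priors on the intercept and error scale."""
    Z_parts = [np.ones(n)]
    if cols:
        Z_parts.append(X[:, list(cols)])
    Z = np.column_stack(Z_parts)
    beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    ssr = np.sum((y - Z @ beta_hat) ** 2)        # residual sum of squares
    tss = np.sum((y - y.mean()) ** 2)            # total sum of squares
    k = len(cols)
    return (0.5 * k * np.log(g / (1.0 + g))
            - 0.5 * (n - 1) * np.log(ssr / (1.0 + g) + tss * g / (1.0 + g)))

# Enumerate all 2^p models under a uniform model prior.
models = [cols for r in range(p + 1) for cols in combinations(range(p), r)]
logml = np.array([log_marglik(c) for c in models])
post = np.exp(logml - logml.max())
post /= post.sum()

pip = [sum(post[i] for i, c in enumerate(models) if j in c) for j in range(p)]
print("posterior inclusion probabilities:", np.round(pip, 3))
```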

Journal ArticleDOI
TL;DR: In this article, the authors review data assimilation from the perspective of Bayesian statistics, including linkages to optimal interpolation, kriging, Kalman filtering, smoothing, and variational analysis.

Journal ArticleDOI
TL;DR: Two microarray data sets as well as simulations are used to evaluate the methodology, the power diagnostics showing why nonnull cases might easily fail to appear on a list of "significant" discoveries are shown.
Abstract: Modern scientific technology has provided a new class of large-scale simultaneous inference problems, with thousands of hypothesis tests to consider at the same time. Microarrays epitomize this type of technology, but similar situations arise in proteomics, spectroscopy, imaging, and social science surveys. This paper uses false discovery rate methods to carry out both size and power calculations on large-scale problems. A simple empirical Bayes approach allows the false discovery rate (fdr) analysis to proceed with a minimum of frequentist or Bayesian modeling assumptions. Closed-form accuracy formulas are derived for estimated false discovery rates, and used to compare different methodologies: local or tail-area fdr's, theoretical, permutation, or empirical null hypothesis estimates. Two microarray data sets as well as simulations are used to evaluate the methodology, the power diagnostics showing why nonnull cases might easily fail to appear on a list of "significant" discoveries.
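
The local false discovery rate at the heart of the paper is fdr(z) = p0 f0(z) / f(z). The sketch below evaluates it on simulated z-values using the theoretical N(0,1) null, a kernel estimate of the mixture density and a conservative p0 = 1; it illustrates the quantity rather than the paper's estimator, which fits f by Poisson regression on histogram counts and also considers an empirical null.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated z-values: 95% null N(0, 1), 5% non-null shifted to 3.
z = np.concatenate([rng.normal(size=1900), rng.normal(loc=3.0, size=100)])

f_hat = stats.gaussian_kde(z)     # smooth estimate of the mixture density f(z)
f0 = stats.norm.pdf               # theoretical null density
p0 = 1.0                          # conservative upper bound on the null proportion

local_fdr = np.clip(p0 * f0(z) / f_hat(z), 0.0, 1.0)
print("cases reported at local fdr < 0.2:", np.sum(local_fdr < 0.2))
```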

Book
01 Jan 2007
TL;DR: In this article, the authors present a self-contained entry to computational Bayesian statistics, focusing on standard statistical models and backed up by discussed real datasets available from the book website.
Abstract: This Bayesian modeling book is intended for practitioners and applied statisticians looking for a self-contained entry to computational Bayesian statistics. Focusing on standard statistical models and backed up by discussed real datasets available from the book website, it provides an operational methodology for conducting Bayesian inference, rather than focusing on its theoretical justifications. Special attention is paid to the derivation of prior distributions in each case and specific reference solutions are given for each of the models. Similarly, computational details are worked out to lead the reader towards an effective programming of the methods given in the book.

01 Jan 2007
TL;DR: The approximation tool for latent GMRF models is introduced and the approximation for the posterior of the hyperparameters θ in equation (1) is shown to give extremely accurate results in a fraction of the computing time used by MCMC algorithms.
Abstract: This thesis consists of five papers, presented in chronological order. Their content is summarised in this section. Paper I introduces the approximation tool for latent GMRF models and discusses, in particular, the approximation for the posterior of the hyperparameters θ in equation (1). It is shown that this approximation is indeed very accurate, as even long MCMC runs cannot detect any error in it. A Gaussian approximation to the density of χi | θ, y is also discussed. This appears to give reasonable results and it is very fast to compute. However, slight errors are detected when comparing the approximation with long MCMC runs. These are mostly due to the fact that a possibly skewed density is approximated via a symmetric one. Paper I also presents some details about sparse matrix algorithms. The core of the thesis is presented in Paper II. Here most of the remaining issues present in Paper I are solved. Three different approximations for χi | θ, y, with different degrees of accuracy and computational costs, are described. Moreover, ways to assess the approximation error and considerations about the asymptotic behaviour of the approximations are also discussed. Through a series of examples covering a wide range of commonly used latent GMRF models, the approximations are shown to give extremely accurate results in a fraction of the computing time used by MCMC algorithms. Paper III applies the same ideas as Paper II to generalised linear mixed models where χ represents a latent variable at n spatial sites on a two-dimensional domain. Out of these n sites, k, with n >> k, are observed through data. The n sites are assumed to be on a regular grid and wrapped on a torus. For the class of models described in Paper III the computations are based on the discrete Fourier transform instead of sparse matrices. Paper III also illustrates how the marginal likelihood π(y) can be approximated, provides approximate strategies for Bayesian outlier detection and performs approximate evaluation of spatial experimental design. Paper IV presents yet another application of the ideas in Paper II. Here approximate techniques are used to do inference on multivariate stochastic volatility models, a class of models widely used in financial applications. Paper IV also discusses problems deriving from the increased dimension of the parameter vector θ, a condition which makes all numerical integration more computationally intensive. Different approximations for the posterior marginals of the parameters θ, π(θi | y), are also introduced. Approximations to the marginal likelihood π(y) are used in order to perform model comparison. Finally, Paper V is a manual for a program, named inla, which implements all the approximations described in Paper II. A large series of worked-out examples, covering many well-known models, illustrates the use and the performance of the inla program. This program is a valuable instrument since it makes most of the Bayesian inference techniques described in this thesis easily available to everyone.
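
The central approximation for the hyperparameter posterior discussed in Papers I and II takes the following form in the notation later standardised in the INLA literature (the symbols x and x*(θ) are assumed here; the thesis writes the latent field as χ):

```latex
\tilde\pi(\theta \mid y) \;\propto\; \left.\frac{\pi(x,\theta,y)}{\tilde\pi_{G}(x \mid \theta, y)}\right|_{x = x^{*}(\theta)},
```

where π̃_G is a Gaussian approximation to the full conditional of the latent field and x*(θ) is its mode; posterior marginals for the hyperparameters and the latent components are then obtained by numerical integration over θ.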

Journal ArticleDOI
TL;DR: Investigation of the effect of improper data partitioning on phylogenetic accuracy, as well as of the type I error rate and sensitivity of Bayes factors, a commonly used method for choosing among different partitioning strategies in Bayesian analyses, suggests that model partitioning is important for large data sets.
Abstract: As larger, more complex data sets are being used to infer phylogenies, accuracy of these phylogenies increasingly requires models of evolution that accommodate heterogeneity in the processes of molecular evolution. We investigated the effect of improper data partitioning on phylogenetic accuracy, as well as the type I error rate and sensitivity of Bayes factors, a commonly used method for choosing among different partitioning strategies in Bayesian analyses. We also used Bayes factors to test empirical data for the need to divide data in a manner that has no expected biological meaning. Posterior probability estimates are misleading when an incorrect partitioning strategy is assumed. The error was greatest when the assumed model was underpartitioned. These results suggest that model partitioning is important for large data sets. Bayes factors performed well, giving a 5% type I error rate, which is remarkably consistent with standard frequentist hypothesis tests. The sensitivity of Bayes factors was found to be quite high when the across-class model heterogeneity reflected that of empirical data. These results suggest that Bayes factors represent a robust method of choosing among partitioning strategies. Lastly, results of tests for the inclusion of unexpected divisions in empirical data mirrored the simulation results, although the outcome of such tests is highly dependent on accounting for rate variation among classes. We conclude by discussing other approaches for partitioning data, as well as other applications of Bayes factors.

Journal ArticleDOI
TL;DR: In this article, the current state of spatial point process theory and directions for future research are summarized and discussed, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications.
Abstract: We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs and Cox process models, diagnostic tools and model checking, Markov chain Monte Carlo algorithms, computational methods for likelihood-based inference, and quick non-likelihood approaches to inference.
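
As a small illustration of the simplest model class mentioned, an inhomogeneous Poisson process, the sketch below simulates a realisation on the unit square by Lewis-Shedler thinning of a homogeneous proposal; the intensity surface and its upper bound are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def intensity(x, y):
    """Assumed intensity surface on the unit square (not from the paper)."""
    return 200.0 * np.exp(-3.0 * x) * (1.0 + y)

lam_max = 400.0                                  # upper bound on the intensity
n_prop = rng.poisson(lam_max)                    # homogeneous proposal process
xy = rng.uniform(size=(n_prop, 2))
keep = rng.uniform(size=n_prop) < intensity(xy[:, 0], xy[:, 1]) / lam_max
points = xy[keep]                                # realisation of the target process
print(f"{len(points)} points retained out of {n_prop} proposals")
```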

Journal ArticleDOI
TL;DR: In this article, the Savage-Dickey density ratio (SDR) is used to determine the Bayes factor of two nested models and hence perform model selection, based on which a non-scale invariant spectral index of perturbations is favored for any sensible choice of prior.
Abstract: Bayesian model selection is a tool to decide whether the introduction of a new parameter is warranted by data. I argue that the usual sampling statistic significance tests for a null hypothesis can be misleading, since they do not take into account the information gained through the data, when updating the prior distribution to the posterior. On the contrary, Bayesian model selection offers a quantitative implementation of Occam’s razor. I introduce the Savage–Dickey density ratio, a computationally quick method to determine the Bayes factor of two nested models and hence perform model selection. As an illustration, I consider three key parameters for our understanding of the cosmological concordance model. By using WMAP 3–year data complemented by other cosmological measurements, I show that a non–scale invariant spectral index of perturbations is favoured for any sensible choice of prior. It is also found that a flat Universe is favoured with odds of 29 : 1 over non–flat models, and that there is strong evidence against a CDM isocurvature component to the initial conditions which is totally (anti)correlated with the adiabatic mode (odds of about 2000 : 1), but that this is strongly dependent on the prior adopted. These results are contrasted with the analysis of WMAP 1–year data, which were not informative enough to allow a conclusion as to the status of the spectral index. In a companion paper, a new technique to forecast the Bayes factor of a future observation is presented.
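
A minimal sketch of the Savage-Dickey density ratio on a toy nested comparison: a normal mean fixed at zero versus given a unit-normal prior, with known data variance. The conjugate posterior makes both densities available in closed form, whereas in the paper's cosmological setting the posterior density at the nested value would be estimated from MCMC samples; the data and prior scale below are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# M0 fixes mu = 0; M1 gives mu a N(0, 1) prior. Data have known unit variance.
data = rng.normal(loc=0.4, scale=1.0, size=30)
n, xbar = len(data), np.mean(data)

prior = stats.norm(0.0, 1.0)
post_var = 1.0 / (1.0 + n)                 # conjugate posterior for mu under M1
post_mean = post_var * n * xbar
posterior = stats.norm(post_mean, np.sqrt(post_var))

# Savage-Dickey: BF_01 = posterior density at mu = 0 over prior density at mu = 0.
bf01 = posterior.pdf(0.0) / prior.pdf(0.0)
print(f"BF_01 = {bf01:.3f} (values below 1 favour the extra parameter)")
```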

Journal ArticleDOI
TL;DR: The present study compares the performance and applicability of the EnKF and BMA for probabilistic ensemble streamflow forecasting, an application for which a robust comparison of the predictive skills of these approaches can be conducted and suggests that for the watershed under consideration, BMA cannot achieve a performance matching that of theEnKF method.
Abstract: Predictive uncertainty analysis in hydrologic modeling has become an active area of research, the goal being to generate meaningful error bounds on model predictions. State-space filtering methods, such as the ensemble Kalman filter (EnKF), have shown the most flexibility to integrate all sources of uncertainty. However, predictive uncertainty analyses are typically carried out using a single conceptual mathematical model of the hydrologic system, rejecting a priori valid alternative plausible models and possibly underestimating uncertainty in the model itself. Methods based on Bayesian model averaging (BMA) have also been proposed in the statistical and meteorological literature as a means to account explicitly for conceptual model uncertainty. The present study compares the performance and applicability of the EnKF and BMA for probabilistic ensemble streamflow forecasting, an application for which a robust comparison of the predictive skills of these approaches can be conducted. The results suggest that for the watershed under consideration, BMA cannot achieve a performance matching that of the EnKF method.
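
For reference, one stochastic EnKF analysis step with perturbed observations, the state-space filtering ingredient the study compares against BMA; the ensemble size, state dimension and observation operator below are assumptions, not the hydrologic configuration of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_analysis(ensemble, obs, H, obs_var):
    """One stochastic-EnKF analysis step with perturbed observations.
    ensemble: (n_members, n_state); H: (n_obs, n_state). A generic sketch."""
    n_mem = ensemble.shape[0]
    anom = ensemble - ensemble.mean(axis=0)             # ensemble anomalies
    P = anom.T @ anom / (n_mem - 1)                     # sample state covariance
    R = np.eye(H.shape[0]) * obs_var
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    obs_pert = obs + rng.normal(scale=np.sqrt(obs_var), size=(n_mem, H.shape[0]))
    innov = obs_pert - ensemble @ H.T                   # perturbed innovations
    return ensemble + innov @ K.T

# Example: 50 members, 3 state variables, the first of which is observed.
ens = rng.normal(loc=1.0, size=(50, 3))
H = np.array([[1.0, 0.0, 0.0]])
updated = enkf_analysis(ens, obs=np.array([2.0]), H=H, obs_var=0.1)
print("prior vs posterior mean of observed state:",
      ens[:, 0].mean(), updated[:, 0].mean())
```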


01 Jan 2007
TL;DR: Low-nitrate plutonia sols having a NO3/Pu mole ratio in the range 0.1 to 0.4 with an average crystallite diameter of 30 to 80 A can be produced when a sol is prepared by solvent extraction of a plutonium nitrate seeded with a plutonian sol.
Abstract: Low-nitrate plutonia sols having a NO3/Pu mole ratio in the range 0.1 to 0.4 with an average crystallite diameter of 30 to 80 Å can be produced when a sol is prepared by solvent extraction of a plutonium nitrate seeded with a plutonia sol. When the seeded sol is taken to dryness and heated for 10 to 120 minutes at a temperature in the range 180-230 °C in a dry sweep gas, nitrate removal occurs and the baked solid can easily be dispersed to form a stable sol.

Journal ArticleDOI
TL;DR: These findings quantify to what extent the inclusion of independent prior knowledge improves the network reconstruction accuracy, and the values of the hyperparameters inferred with the proposed scheme were found to be close to optimal with respect to minimizing the reconstruction error.
Abstract: There have been various attempts to reconstruct gene regulatory networks from microarray expression data in the past. However, owing to the limited amount of independent experimental conditions and noise inherent in the measurements, the results have been rather modest so far. For this reason it seems advisable to include biological prior knowledge, related, for instance, to transcription factor binding locations in promoter regions or partially known signalling pathways from the literature. In the present paper, we consider a Bayesian approach to systematically integrate expression data with multiple sources of prior knowledge. Each source is encoded via a separate energy function, from which a prior distribution over network structures in the form of a Gibbs distribution is constructed. The hyperparameters associated with the different sources of prior knowledge, which measure the influence of the respective prior relative to the data, are sampled from the posterior distribution with MCMC. We have evaluated the proposed scheme on the yeast cell cycle and the Raf signalling pathway. Our findings quantify to what extent the inclusion of independent prior knowledge improves the network reconstruction accuracy, and the values of the hyperparameters inferred with the proposed scheme were found to be close to optimal with respect to minimizing the reconstruction error.
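
The prior over network structures described in the abstract can be written as a Gibbs distribution; in a generic notation (assumed here, not quoted from the paper):

```latex
P(G \mid \beta_1,\dots,\beta_K) \;=\; \frac{1}{Z(\beta_1,\dots,\beta_K)} \exp\!\Big(-\sum_{k=1}^{K} \beta_k\, E_k(G)\Big),
```

where E_k(G) is the energy that the k-th source of prior knowledge assigns to structure G, and the hyperparameters β_k, which weight each source against the expression data, are sampled together with G by MCMC.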

Journal ArticleDOI
TL;DR: This work suggests that the most likely cause of an observed movement can be inferred, using a statistical approach known as empirical Bayesian inference, by minimizing the prediction error at all cortical levels that are engaged during movement observation.
Abstract: Is it possible to understand the intentions of other people by simply observing their movements? Many neuroscientists believe that this ability depends on the brain's mirror-neuron system, which provides a direct link between action and observation. Precisely how intentions can be inferred through movement observation, however, has provoked much debate. One problem in inferring the cause of an observed action is that the problem is ill-posed, because identical movements can be made when performing different actions with different goals. Here we suggest that this problem is solved by the mirror-neuron system using predictive coding on the basis of a statistical approach known as empirical Bayesian inference. This means that the most likely cause of an observed movement can be inferred by minimizing the prediction error at all cortical levels that are engaged during movement observation. This account identifies a precise role for the mirror-neuron system in our ability to infer intentions from observed movement and outlines possible computational mechanisms.

Journal ArticleDOI
TL;DR: A Bayesian probabilistic inferential framework, which provides a natural means for incorporating both errors and prior information about the source, is presented and the inverse source determination method is validated against real data sets acquired in a highly disturbed flow field in an urban environment.

Journal ArticleDOI
TL;DR: In this article, the authors propose a simple, general method of semiparametric inference for Gaussian copula models via a type of rank likelihood function for the association parameters, which can be viewed as a generalization of marginal likelihood estimation.
Abstract: Quantitative studies in many fields involve the analysis of multivariate data of diverse types, including measurements that we may consider binary, ordinal and continuous. One approach to the analysis of such mixed data is to use a copula model, in which the associations among the variables are parameterized separately from their univariate marginal distributions. The purpose of this article is to provide a simple, general method of semiparametric inference for copula models via a type of rank likelihood function for the association parameters. The proposed method of inference can be viewed as a generalization of marginal likelihood estimation, in which inference for a parameter of interest is based on a summary statistic whose sampling distribution is not a function of any nuisance parameters. In the context of copula estimation, the extended rank likelihood is a function of the association parameters only and its applicability does not depend on any assumptions about the marginal distributions of the data, thus making it appropriate for the analysis of mixed continuous and discrete data with arbitrary marginal distributions. Estimation and inference for parameters of the Gaussian copula are available via a straightforward Markov chain Monte Carlo algorithm based on Gibbs sampling. Specification of prior distributions or a parametric form for the univariate marginal distributions of the data is not necessary.