
Showing papers on "Bayesian probability published in 2004"


Journal ArticleDOI
TL;DR: A major challenge for neuroscientists is to test experimentally how such probabilistic computations might be achieved in populations of neurons, and so determine whether and how neurons code information about sensory uncertainty.

2,067 citations


Journal ArticleDOI
TL;DR: A range of Bayesian hierarchical models using the Markov chain Monte Carlo software WinBUGS are presented that allow for variation in true treatment effects across trials, and models where the between-trials variance is homogeneous across treatment comparisons are considered.
Abstract: Mixed treatment comparison (MTC) meta-analysis is a generalization of standard pairwise meta-analysis for A vs B trials, to data structures that include, for example, A vs B, B vs C, and A vs C trials. There are two roles for MTC: one is to strengthen inference concerning the relative efficacy of two treatments, by including both 'direct' and 'indirect' comparisons. The other is to facilitate simultaneous inference regarding all treatments, in order for example to select the best treatment. In this paper, we present a range of Bayesian hierarchical models using the Markov chain Monte Carlo software WinBUGS. These are multivariate random effects models that allow for variation in true treatment effects across trials. We consider models where the between-trials variance is homogeneous across treatment comparisons as well as heterogeneous variance models. We also compare models with fixed (unconstrained) baseline study effects with models with random baselines drawn from a common distribution. These models are applied to an illustrative data set and posterior parameter distributions are compared. We discuss model critique and model selection, illustrating the role of Bayesian deviance analysis, and node-based model criticism. The assumptions underlying the MTC models and their parameterization are also discussed.
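The identity that lets MTC combine "direct" and "indirect" evidence is the consistency assumption d_AC = d_AB + d_BC on some additive scale (e.g., log odds ratios). The sketch below is not the paper's WinBUGS random-effects models; it is a minimal fixed-effect Python illustration with made-up summary estimates, showing how an indirect A-versus-C contrast is formed via B and pooled with the direct evidence by inverse-variance weighting.

```python
# Minimal sketch (not the paper's WinBUGS models): combining a direct A-vs-C
# estimate with an indirect one formed from A-vs-B and B-vs-C trials, under
# the consistency assumption d_AC = d_AB + d_BC. All numbers are made up.

def pool(estimates):
    """Fixed-effect inverse-variance pooling of (estimate, variance) pairs."""
    weights = [1.0 / v for _, v in estimates]
    mean = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    return mean, 1.0 / sum(weights)

# Hypothetical log-odds-ratio summaries from pairwise meta-analyses.
d_ab, var_ab = -0.30, 0.04                 # A vs B
d_bc, var_bc = -0.20, 0.05                 # B vs C
d_ac_direct, var_ac_direct = -0.45, 0.09   # A vs C (direct trials)

# Indirect A-vs-C estimate via B; variances add under independence.
d_ac_indirect, var_ac_indirect = d_ab + d_bc, var_ab + var_bc

combined, combined_var = pool([(d_ac_direct, var_ac_direct),
                               (d_ac_indirect, var_ac_indirect)])
print(f"direct   : {d_ac_direct:+.3f} (var {var_ac_direct:.3f})")
print(f"indirect : {d_ac_indirect:+.3f} (var {var_ac_indirect:.3f})")
print(f"combined : {combined:+.3f} (var {combined_var:.3f})")
```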

1,861 citations


Journal ArticleDOI
15 Jan 2004-Nature
TL;DR: This work shows that subjects internally represent both the statistical distribution of the task and their sensory uncertainty, combining them in a manner consistent with a performance-optimizing bayesian process.
Abstract: When we learn a new motor skill, such as playing an approaching tennis ball, both our sensors and the task possess variability. Our sensors provide imperfect information about the ball's velocity, so we can only estimate it. Combining information from multiple modalities can reduce the error in this estimate. On a longer time scale, not all velocities are a priori equally probable, and over the course of a match there will be a probability distribution of velocities. According to bayesian theory, an optimal estimate results from combining information about the distribution of velocities-the prior-with evidence from sensory feedback. As uncertainty increases, when playing in fog or at dusk, the system should increasingly rely on prior knowledge. To use a bayesian strategy, the brain would need to represent the prior distribution and the level of uncertainty in the sensory feedback. Here we control the statistical variations of a new sensorimotor task and manipulate the uncertainty of the sensory feedback. We show that subjects internally represent both the statistical distribution of the task and their sensory uncertainty, combining them in a manner consistent with a performance-optimizing bayesian process. The central nervous system therefore employs probabilistic models during sensorimotor learning.
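A minimal sketch of the Gaussian prior-likelihood combination the experiment probes: the posterior estimate is a precision-weighted average of the prior mean and the sensory observation, so noisier feedback pulls the estimate toward the prior. The numbers below are illustrative, not the paper's experimental values.

```python
# Minimal sketch of bayesian cue combination: Gaussian prior over the task
# variable combined with Gaussian sensory feedback. Illustrative numbers only.

def posterior(prior_mean, prior_var, obs, obs_var):
    """Posterior mean/variance for a Gaussian prior and Gaussian likelihood."""
    w = prior_var / (prior_var + obs_var)          # weight on the observation
    mean = (1 - w) * prior_mean + w * obs
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

# As sensory uncertainty grows (e.g. playing in fog), the estimate is pulled
# further toward the prior mean.
for obs_var in (0.5, 2.0, 8.0):
    m, v = posterior(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=obs_var)
    print(f"obs_var={obs_var:>4}: posterior mean {m:.2f}, variance {v:.2f}")
```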

1,811 citations


Journal ArticleDOI
TL;DR: A Bayesian MCMC approach to the analysis of combined data sets was developed and its utility in inferring relationships among gall wasps based on data from morphology and four genes was explored, supporting the utility of morphological data in multigene analyses.
Abstract: The recent development of Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) techniques has facilitated the exploration of parameter-rich evolutionary models. At the same time, stochastic models have become more realistic (and complex) and have been extended to new types of data, such as morphology. Based on this foundation, we developed a Bayesian MCMC approach to the analysis of combined data sets and explored its utility in inferring relationships among gall wasps based on data from morphology and four genes (nuclear and mitochondrial, ribosomal and protein coding). Examined models range in complexity from those recognizing only a morphological and a molecular partition to those having complex substitution models with independent parameters for each gene. Bayesian MCMC analysis deals efficiently with complex models: convergence occurs faster and more predictably for complex models, mixing is adequate for all parameters even under very complex models, and the parameter update cycle is virtually unaffected by model partitioning across sites. Morphology contributed only 5% of the characters in the data set but nevertheless influenced the combined-data tree, supporting the utility of morphological data in multigene analyses. We used Bayesian criteria (Bayes factors) to show that process heterogeneity across data partitions is a significant model component, although not as important as among-site rate variation. More complex evolutionary models are associated with more topological uncertainty and less conflict between morphology and molecules. Bayes factors sometimes favor simpler models over considerably more parameter-rich models, but the best model overall is also the most complex and Bayes factors do not support exclusion of apparently weak parameters from this model. Thus, Bayes factors appear to be useful for selecting among complex models, but it is still unclear whether their use strikes a reasonable balance between model complexity and error in parameter estimates.

1,758 citations


01 Jan 2004
TL;DR: This work has consistently shown that there are large performance benefits to be gained by applying Sigma-Point Kalman filters to areas where EKFs have been used as the de facto standard in the past, as well as in new areas where the use of the EKF is impossible.
Abstract: Probabilistic inference is the problem of estimating the hidden variables (states or parameters) of a system in an optimal and consistent fashion as a set of noisy or incomplete observations of the system becomes available online. The optimal solution to this problem is given by the recursive Bayesian estimation algorithm which recursively updates the posterior density of the system state as new observations arrive. This posterior density constitutes the complete solution to the probabilistic inference problem, and allows us to calculate any “optimal” estimate of the state. Unfortunately, for most real-world problems, the optimal Bayesian recursion is intractable and approximate solutions must be used. Within the space of approximate solutions, the extended Kalman filter (EKF) has become one of the most widely used algorithms with applications in state, parameter and dual estimation. Unfortunately, the EKF is based on a sub-optimal implementation of the recursive Bayesian estimation framework applied to Gaussian random variables. This can seriously affect the accuracy or even lead to divergence of any inference system that is based on the EKF or that uses the EKF as a component part. Recently a number of related novel, more accurate and theoretically better motivated algorithmic alternatives to the EKF have surfaced in the literature, with specific application to state estimation for automatic control. We have extended these algorithms, all based on derivativeless deterministic sampling based approximations of the relevant Gaussian statistics, to a family of algorithms called Sigma-Point Kalman Filters (SPKF). Furthermore, we successfully expanded the use of this group of algorithms (SPKFs) within the general field of probabilistic inference and machine learning, both as stand-alone filters and as subcomponents of more powerful sequential Monte Carlo methods (particle filters). We have consistently shown that there are large performance benefits to be gained by applying Sigma-Point Kalman filters to areas where EKFs have been used as the de facto standard in the past, as well as in new areas where the use of the EKF is impossible.
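The derivativeless, deterministic-sampling idea behind SPKFs can be illustrated with the unscented transform: propagate a small set of sigma points through the nonlinearity instead of linearizing it as the EKF does. The sketch below uses common default scaling parameters (alpha, beta, kappa) and a made-up nonlinearity; it is a generic illustration, not any specific SPKF variant from this work.

```python
import numpy as np

# Minimal sketch of the sigma-point idea (unscented transform): a 2n+1 point
# deterministic sample captures the mean and covariance of a Gaussian and is
# pushed through the nonlinearity. Parameter choices are common defaults.

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])          # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.array([f(x) for x in sigma])                        # propagate points
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Example: a mildly nonlinear 2-D map.
ym, yc = unscented_transform(np.array([1.0, 0.5]),
                             np.diag([0.2, 0.1]),
                             lambda x: np.array([np.sin(x[0]), x[0] * x[1]]))
print(ym, yc, sep="\n")
```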

1,116 citations


Journal ArticleDOI
TL;DR: The proposed parallel algorithm retains the ability to explore multiple peaks in the posterior distribution of trees while maintaining a fast execution time, and performance results indicate nearly linear speed improvement in both programming models for small and large data sets.
Abstract: Motivation: Bayesian estimation of phylogeny is based on the posterior probability distribution of trees. Currently, the only numerical method that can effectively approximate posterior probabilities of trees is Markov chain Monte Carlo (MCMC). Standard implementations of MCMC can be prone to entrapment in local optima. Metropolis coupled MCMC [(MC)3], a variant of MCMC, allows multiple peaks in the landscape of trees to be more readily explored, but at the cost of increased execution time. Results: This paper presents a parallel algorithm for (MC)3. The proposed parallel algorithm retains the ability to explore multiple peaks in the posterior distribution of trees while maintaining a fast execution time. The algorithm has been implemented using two popular parallel programming models: message passing and shared memory. Performance results indicate nearly linear speed improvement in both programming models for small and large data sets. Availability: MrBayes v3.0 is available at http://morphbank.ebc.uu.se/mrbayes/
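The mechanism that lets (MC)^3 escape local optima is easiest to see on a toy problem: several chains run on tempered versions of the target, and occasional state swaps let the "cold" chain inherit moves discovered by the "heated" chains. The sketch below is a serial, one-dimensional illustration with an invented bimodal target; the paper's algorithm operates on tree space and distributes the chains across processors.

```python
import math, random

# Toy Metropolis-coupled MCMC: chains at different temperatures plus swap moves.
def log_target(x):
    # Mixture of two well-separated Gaussians (two "peaks"). Illustrative only.
    return math.log(0.5 * math.exp(-0.5 * (x + 4) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 4) ** 2))

betas = [1.0, 0.5, 0.25, 0.125]      # beta = 1/temperature; chain 0 is "cold"
states = [0.0] * len(betas)
cold_samples = []
random.seed(1)

for it in range(20000):
    # Within-chain Metropolis updates on the tempered targets p(x)**beta.
    for i, beta in enumerate(betas):
        prop = states[i] + random.gauss(0.0, 1.0)
        if math.log(random.random()) < beta * (log_target(prop) - log_target(states[i])):
            states[i] = prop
    # Propose swapping the states of a random pair of adjacent chains.
    i = random.randrange(len(betas) - 1)
    log_ratio = (betas[i] - betas[i + 1]) * (log_target(states[i + 1]) - log_target(states[i]))
    if math.log(random.random()) < log_ratio:
        states[i], states[i + 1] = states[i + 1], states[i]
    cold_samples.append(states[0])

print("fraction of cold-chain samples near the left peak:",
      sum(s < 0 for s in cold_samples) / len(cold_samples))
```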

1,005 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian version of P-splines is proposed for modeling nonlinear smooth effects of covariates within the additive and varying coefficient models framework; allowing the smoothing parameters to be locally adaptive is particularly useful in situations with changing curvature of the underlying smooth function or with highly oscillating functions.
Abstract: P-splines are an attractive approach for modeling nonlinear smooth effects of covariates within the additive and varying coefficient models framework. In this article, we first develop a Bayesian version for P-splines and generalize in a second step the approach in various ways. First, the assumption of constant smoothing parameters can be replaced by allowing the smoothing parameters to be locally adaptive. This is particularly useful in situations with changing curvature of the underlying smooth function or with highly oscillating functions. In a second extension, one-dimensional P-splines are generalized to two-dimensional surface fitting for modeling interactions between metrical covariates. In a last step, the approach is extended to situations with spatially correlated responses allowing the estimation of geoadditive models. Inference is fully Bayesian and uses recent MCMC techniques for drawing random samples from the posterior. In a couple of simulation studies the performance of Bayesian P-spline...
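The P-spline building blocks are a B-spline design matrix and a difference penalty on adjacent coefficients; in the Bayesian version the penalty corresponds to a random-walk prior. The sketch below computes the penalized least-squares fit for a fixed smoothing parameter, which is the posterior mode under a second-order random-walk prior with Gaussian errors; the paper instead places priors on (possibly locally adaptive) smoothing parameters and uses MCMC. Data and tuning values are invented.

```python
import numpy as np

def bspline_basis(x, knots, degree, i):
    """Cox-de Boor recursion for the i-th B-spline basis function."""
    if degree == 0:
        return ((knots[i] <= x) & (x < knots[i + 1])).astype(float)
    out = np.zeros_like(x)
    if knots[i + degree] > knots[i]:
        out += (x - knots[i]) / (knots[i + degree] - knots[i]) * \
               bspline_basis(x, knots, degree - 1, i)
    if knots[i + degree + 1] > knots[i + 1]:
        out += (knots[i + degree + 1] - x) / (knots[i + degree + 1] - knots[i + 1]) * \
               bspline_basis(x, knots, degree - 1, i + 1)
    return out

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)       # toy data

degree, n_interior = 3, 20
h = 1.0 / n_interior
knots = np.linspace(-degree * h, 1 + degree * h, n_interior + 2 * degree + 1)
K = len(knots) - degree - 1
B = np.column_stack([bspline_basis(x, knots, degree, i) for i in range(K)])

lam = 10.0                                                   # fixed smoothing parameter
D = np.diff(np.eye(K), n=2, axis=0)                          # second-order differences
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)     # penalized LS = posterior mode
fit = B @ coef
print("residual SD:", round(float(np.std(y - fit)), 3))
```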

889 citations


Book
07 Dec 2004
TL;DR: This chapter reviews Bayesian advances in survival analysis and discusses the various semiparametric modeling techniques that are now commonly used, with a focus on proportional hazards models.
Abstract: Great strides in the analysis of survival data using Bayesian methods have been made in the past ten years due to advances in Bayesian computation and the feasibility of such methods. In this chapter, we review Bayesian advances in survival analysis and discuss the various semiparametric modeling techniques that are now commonly used. We review parametric and semiparametric approaches to Bayesian survival analysis, with a focus on proportional hazards models. Reference to other types of models are also given. Keywords: beta process; Cox model; Dirichlet process; gamma process; Gibbs sampling; piecewise exponential model; Weibull model
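As a minimal illustration of the simplest parametric case touched on in the chapter (the exponential/piecewise exponential family), the sketch below uses the conjugate update for an exponential survival model with right censoring: with a Gamma(a, b) prior on the hazard, d observed events, and total follow-up time T, the posterior is Gamma(a + d, b + T). The data are simulated for illustration and this is not the semiparametric machinery (beta, gamma, or Dirichlet process priors) reviewed in the chapter.

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 0.2
times = rng.exponential(1 / true_rate, size=50)          # latent event times
censor = rng.exponential(8.0, size=50)                    # censoring times
observed = np.minimum(times, censor)
event = times <= censor                                   # True = event, False = censored

a, b = 0.01, 0.01                                         # vague gamma prior on the hazard
a_post = a + event.sum()                                  # add number of events
b_post = b + observed.sum()                               # add total exposure time

posterior_mean = a_post / b_post
ci = np.quantile(rng.gamma(a_post, 1 / b_post, 10000), [0.025, 0.975])
print(f"posterior mean hazard {posterior_mean:.3f}, 95% interval {ci.round(3)}")
```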

858 citations


Book ChapterDOI
01 Jan 2004
TL;DR: A novel “Bayes-frequentist compromise” is proposed that combines honest subjective non- or semiparametric Bayesian inference with good frequentist behavior, even in cases where the model is so large and the likelihood function so complex that standard Bayes procedures have poor frequentist performance.
Abstract: I describe two new methods for estimating the optimal treatment regime (equivalently, protocol, plan or strategy) from very high dimensional observational and experimental data: (i) g-estimation of an optimal double-regime structural nested mean model (drSNMM) and (ii) g-estimation of a standard single regime SNMM combined with sequential dynamic-programming (DP) regression. These methods are compared to certain regression methods found in the sequential decision and reinforcement learning literatures and to the regret modelling methods of Murphy (2003). I consider both Bayesian and frequentist inference. In particular, I propose a novel “Bayes-frequentist compromise” that combines honest subjective non- or semiparametric Bayesian inference with good frequentist behavior, even in cases where the model is so large and the likelihood function so complex that standard (uncompromised) Bayes procedures have poor frequentist performance.

595 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian probabilistic approach is presented for selecting the most plausible class of models for a structural or mechanical system within some specified set of model classes, based on system response data.
Abstract: A Bayesian probabilistic approach is presented for selecting the most plausible class of models for a structural or mechanical system within some specified set of model classes, based on system response data. The crux of the approach is to rank the classes of models based on their probabilities conditional on the response data which can be calculated based on Bayes’ theorem and an asymptotic expansion for the evidence for each model class. The approach provides a quantitative expression of a principle of model parsimony or of Ockham’s razor which in this context can be stated as "simpler models are to be preferred over unnecessarily complicated ones." Examples are presented to illustrate the method using a single-degree-of-freedom bilinear hysteretic system, a linear two-story frame, and a ten-story shear building, all of which are subjected to seismic excitation.
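The ranking step in this approach reduces to Bayes' theorem over model classes: given an (approximate) log evidence for each class and prior class probabilities, the posterior probability of each class follows by normalization. The sketch below uses placeholder log-evidence values and hypothetical model-class names; in the paper the evidences come from an asymptotic (Laplace-type) expansion that automatically penalizes over-parameterized classes.

```python
import math

# Posterior model-class probabilities from log evidences (placeholder values).
log_evidence = {"linear": -152.3, "bilinear_hysteretic": -148.9, "ten_param": -149.7}
prior = {m: 1.0 / len(log_evidence) for m in log_evidence}   # equal prior plausibility

log_post_unnorm = {m: log_evidence[m] + math.log(prior[m]) for m in log_evidence}
c = max(log_post_unnorm.values())                            # log-sum-exp for stability
z = sum(math.exp(v - c) for v in log_post_unnorm.values())
posterior = {m: math.exp(v - c) / z for m, v in log_post_unnorm.items()}

for m, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{m:22s} P(model class | data) = {p:.3f}")
```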

529 citations


Journal ArticleDOI
TL;DR: The underlying advantages of Bayesian approaches are putting them at the forefront of genetic data analysis in an increasing number of areas.
Abstract: Bayesian statistics allow scientists to easily incorporate prior knowledge into their data analysis. Nonetheless, the sheer amount of computational power that is required for Bayesian statistical analyses has previously limited their use in genetics. These computational constraints have now largely been overcome and the underlying advantages of Bayesian approaches are putting them at the forefront of genetic data analysis in an increasing number of areas.

Journal ArticleDOI
TL;DR: A Bayesian network integrating models of the various processes involved in eutrophication in the Neuse River estuary, North Carolina, is described. The results suggest that a compromise is necessary between policy relevance and predictive precision and that, to select defensible environmental management strategies, public officials must adopt decision-making methods that deal explicitly with scientific uncertainty.

Journal ArticleDOI
TL;DR: In this article, the authors embark upon a rather idiosyncratic walk through some of the fundamental philosophical and pedagogical issues at stake in the Bayesian-versus-frequentist debate, while recognizing that each approach has a great deal to contribute to statistical practice and that each is essential for the full development of the other.
Abstract: Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the debate has become considerably muted, with the recognition that each approach has a great deal to contribute to statistical practice and each is actually essential for full development of the other approach. In this article, we embark upon a rather idiosyncratic walk through some of these issues.

Journal Article
TL;DR: This work presents an algorithm that computes the exact posterior probability of a subnetwork, e.g., a directed edge, and shows that also in domains with a large number of variables, exact computation is feasible, given suitable a priori restrictions on the structures.
Abstract: Learning a Bayesian network structure from data is a well-motivated but computationally hard task. We present an algorithm that computes the exact posterior probability of a subnetwork, e.g., a directed edge; a modified version of the algorithm finds one of the most probable network structures. This algorithm runs in time O(n 2^n + n^(k+1) C(m)), where n is the number of network variables, k is a constant maximum in-degree, and C(m) is the cost of computing a single local marginal conditional likelihood for m data instances. This is the first algorithm with less than super-exponential complexity with respect to n. Exact computation allows us to tackle complex cases where existing Monte Carlo methods and local search procedures potentially fail. We show that also in domains with a large number of variables, exact computation is feasible, given suitable a priori restrictions on the structures; combining exact and inexact methods is also possible. We demonstrate the applicability of the presented algorithm on four synthetic data sets with 17, 22, 37, and 100 variables.
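To make the target quantity concrete, the sketch below computes the exact posterior probability of a directed edge for three binary variables by brute-force enumeration of all DAGs under a uniform structure prior and a Dirichlet(1) marginal likelihood. This is only an illustration of what "exact posterior probability of a subnetwork" means; the paper's contribution is computing such quantities without enumeration, in the O(n 2^n + n^(k+1) C(m)) time stated above. The simulated data are invented.

```python
import itertools, math
import numpy as np

def local_score(child, parents, data):
    """Log marginal likelihood of one binary node given its parents (Dirichlet(1) prior)."""
    score = 0.0
    for pa_vals in itertools.product([0, 1], repeat=len(parents)):
        mask = np.ones(len(data), dtype=bool)
        for p, v in zip(parents, pa_vals):
            mask &= data[:, p] == v
        n1 = int((data[mask, child] == 1).sum())
        n0 = int(mask.sum()) - n1
        score += (math.lgamma(2) - math.lgamma(2 + n0 + n1)
                  + math.lgamma(1 + n0) + math.lgamma(1 + n1))
    return score

def all_dags(n):
    """Enumerate all DAGs on n nodes as child -> parent-list dictionaries."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    for edges in itertools.product([0, 1], repeat=len(pairs)):
        g = {i: [] for i in range(n)}
        for (i, j), e in zip(pairs, edges):
            if e:
                g[j].append(i)
        # Acyclicity check: repeatedly remove nodes with no remaining parents.
        remaining = set(range(n))
        parents = {k: set(v) for k, v in g.items()}
        while True:
            free = [v for v in remaining if not (parents[v] & remaining)]
            if not free:
                break
            remaining -= set(free)
        if not remaining:
            yield g

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 300)
x1 = x0 ^ (rng.random(300) < 0.1).astype(int)     # x1 is a noisy copy of x0
x2 = rng.integers(0, 2, 300)                      # independent variable
data = np.column_stack([x0, x1, x2])

log_ws = []
for g in all_dags(3):
    lw = sum(local_score(child, tuple(pa), data) for child, pa in g.items())
    log_ws.append((lw, 0 in g[1]))                # record whether edge x0 -> x1 is present
m = max(lw for lw, _ in log_ws)
den = sum(math.exp(lw - m) for lw, _ in log_ws)
num = sum(math.exp(lw - m) for lw, has in log_ws if has)
print("posterior P(edge x0 -> x1 | data):", round(num / den, 3))
```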

Journal ArticleDOI
TL;DR: In this paper, the authors note that model selection has been examined from both frequentist and Bayesian perspectives and that many tools for selecting the “best model” have been suggested in the literature; they consider the various proposals from a Bayesian decision-theoretic perspective.
Abstract: Model selection is an important part of any statistical analysis and, indeed, is central to the pursuit of science in general. Many authors have examined the question of model selection from both frequentist and Bayesian perspectives, and many tools for selecting the “best model” have been suggested in the literature. This paper considers the various proposals from a Bayesian decision–theoretic perspective.

Journal ArticleDOI
TL;DR: It is shown that when branch lengths are unknown, it is better first to estimate branch lengths and then to estimate site-specific rates, and that a Bayesian approach is superior to maximum-likelihood under a wide range of conditions.
Abstract: The degree to which an amino acid site is free to vary is strongly dependent on its structural and functional importance. An amino acid that plays an essential role is unlikely to change over evolutionary time. Hence, the evolutionary rate at an amino acid site is indicative of how conserved this site is and, in turn, allows evaluation of its importance in maintaining the structure/function of the protein. When using probabilistic methods for site-specific rate inference, few alternatives are possible. In this study we use simulations to compare the maximum-likelihood and Bayesian paradigms. We study the dependence of inference accuracy on such parameters as number of sequences, branch lengths, the shape of the rate distribution, and sequence length. We also study the possibility of simultaneously estimating branch lengths and site-specific rates. Our results show that a Bayesian approach is superior to maximum-likelihood under a wide range of conditions, indicating that the prior that is incorporated into the Bayesian computation significantly improves performance. We show that when branch lengths are unknown, it is better first to estimate branch lengths and then to estimate site-specific rates. This procedure was found to be superior to estimating both the branch lengths and site-specific rates simultaneously. Finally, we illustrate the difference between maximum-likelihood and Bayesian methods when analyzing site-conservation for the apoptosis regulator protein Bcl-x(L).
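The benefit of the prior is easiest to see in a stripped-down caricature: if a site's substitution count k is treated as Poisson with mean rT (T = total tree length), the ML rate is k/T, which collapses to zero at invariant sites, while a Gamma(alpha, beta) prior gives posterior mean (alpha + k)/(beta + T), shrinking toward the prior. The paper works with full phylogenetic likelihoods rather than this toy count model; the numbers below are illustrative.

```python
# Toy contrast of ML vs. Bayesian site-rate estimates under a Poisson caricature.
alpha, beta = 0.5, 0.5          # gamma prior on the site rate (prior mean 1.0)
T = 2.0                         # hypothetical total branch length

for k in (0, 1, 5, 20):
    ml = k / T                               # ML estimate: zero when k = 0
    bayes = (alpha + k) / (beta + T)         # posterior mean under Gamma(alpha, beta)
    print(f"k={k:2d}  ML rate {ml:5.2f}   Bayesian posterior mean {bayes:5.2f}")
```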

Journal ArticleDOI
TL;DR: The “hint of quantum mechanics” via commuting operators reminded me that I never really understood my undergraduate course in quantum mechanics, and this is not a text from which to learn Bayesian methods.
Abstract: (2004). Bayesian Methods for Nonlinear Classification and Regression. Technometrics: Vol. 46, No. 2, pp. 251-252.


Journal ArticleDOI
TL;DR: The conclusion is: for data that are normally distributed, the Bayesian approach can be used with small sample sizes, whilst ML cannot.
Abstract: The main objective of this article is to investigate the empirical performances of the Bayesian approach in analyzing structural equation models with small sample sizes. The traditional maximum likelihood (ML) is also included for comparison. In the context of a confirmatory factor analysis model and a structural equation model, simulation studies are conducted with the different magnitudes of parameters and sample sizes n = da, where d = 2, 3, 4 and 5, and a is the number of unknown parameters. The performances are evaluated in terms of the goodness-of-fit statistics, and various measures on the accuracy of the estimates. The conclusion is: for data that are normally distributed, the Bayesian approach can be used with small sample sizes, whilst ML cannot.

01 Jan 2004
TL;DR: Extensions of penalized spline generalized additive models for analyzing space-time regression data are proposed and studied from a Bayesian perspective using MCMC techniques.
Abstract: We propose extensions of penalized spline generalized additive models for analyzing space-time regression data and study them from a Bayesian perspective. Non-linear effects of continuous covariates and time trends are modelled through Bayesian versions of penalized splines, while correlated spatial effects follow a Markov random field prior. This allows all functions and effects to be treated within a unified general framework by assigning appropriate priors with different forms and degrees of smoothness. Inference can be performed either with full (FB) or empirical Bayes (EB) posterior analysis. FB inference using MCMC techniques is a slight extension of previous work. For EB inference, a computationally efficient solution is developed on the basis of a generalized linear mixed model representation. The second approach can be viewed as posterior mode estimation and is closely related to penalized likelihood estimation in a frequentist setting. Variance components, corresponding to inverse smoothing parameters, are then estimated by marginal likelihood. We carefully compare both inferential procedures in simulation studies and illustrate them through data applications. The methodology is available in the open domain statistical package BayesX and as an S-plus/R function.

Journal ArticleDOI
TL;DR: Recently introduced Bayesian statistical methods enable the study of character evolution while simultaneously accounting for both phylogenetic and mapping uncertainty, adding much needed credibility to the reconstruction of evolutionary history.
Abstract: Much recent progress in evolutionary biology is based on the inference of ancestral states and past transformations in important traits on phylogenetic trees. These exercises often assume that the tree is known without error and that ancestral states and character change can be mapped onto it exactly. In reality, there is often considerable uncertainty about both the tree and the character mapping. Recently introduced Bayesian statistical methods enable the study of character evolution while simultaneously accounting for both phylogenetic and mapping uncertainty, adding much needed credibility to the reconstruction of evolutionary history.

Book
18 Jun 2004
TL;DR: A textbook treatment covering the Bayesian algorithm, prediction and model checking, Bayesian calculations, and applications to randomized, controlled and observational data.
Abstract: Introduction. 1. The Bayesian Algorithm. 2. Prediction and Model Checking. 3. Linear Regression. 4. Bayesian Calculations. 5. Nonlinear Regression Models. 6. Randomized, Controlled and Observational Data. 7. Models for Panel Data. 8. Instrumental Variables. 9. Some Time Series Models. Appendix 1: A Conversion Manual. Appendix 2: Programming. Appendix 3: BUGS. Index

Journal ArticleDOI
TL;DR: This study used an empirical example based on 100 mitochondrial genomes from higher teleost fishes to compare the accuracy of parsimony-based jackknife values with Bayesian support values, and found that the higher BayesianSupport values are inappropriate and should not be interpreted as probabilities that clades are correctly resolved.
Abstract: In this study, we used an empirical example based on 100 mitochondrial genomes from higher teleost fishes to compare the accuracy of parsimony-based jackknife values with Bayesian support values. Phylogenetic analyses of 366 partitions, using differential taxon and character sampling from the entire data matrix of 100 taxa and 7,990 characters, were performed for both phylogenetic methods. The tree topology and branch-support values from each partition were compared with the tree inferred from all taxa and characters. Using this approach, we quantified the accuracy of the branch-support values assigned by the jackknife and Bayesian methods, with respect to each of 15 basal clades. In comparing the jackknife and Bayesian methods, we found that (1) both measures of support differ significantly from an ideal support index; (2) the jackknife underestimated support values; (3) the Bayesian method consistently overestimated support; (4) the magnitude by which Bayesian values overestimate support exceeds the magnitude by which the jackknife underestimates support; and (5) both methods performed poorly when taxon sampling was increased and character sampling was not increased. These results indicate that (1) the higher Bayesian support values are inappropriate (in magnitude), and (2) Bayesian support values should not be interpreted as probabilities that clades are correctly resolved. We advocate the continued use of the relatively conservative bootstrap and jackknife approaches to estimating branch support rather than the more extreme overestimates provided by the Markov Chain Monte Carlo-based Bayesian methods.

Journal ArticleDOI
TL;DR: This work proposes a method for optimal portfolio selection using a Bayesian decision-theoretic framework that addresses two major shortcomings of the traditional Markowitz approach, namely its inability to handle higher moments and its neglect of parameter uncertainty, and employs the skew normal distribution.
Abstract: We propose a method for optimal portfolio selection using a Bayesian decision theoretic framework that addresses two major shortcomings of the Markowitz approach: the ability to handle higher moments and estimation error. We employ the skew normal distribution which has many attractive features for modeling multivariate returns. Our results suggest that it is important to incorporate higher order moments in portfolio selection. Further, our comparison to other methods where parameter uncertainty is either ignored or accommodated in an ad hoc way, shows that our approach leads to higher expected utility than the resampling methods that are common in the practice of finance.
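The decision-theoretic step can be sketched simply: choose portfolio weights by maximizing expected utility averaged over posterior predictive draws of returns, so that estimation risk and higher moments both enter the decision. The draws below are simulated from an arbitrary skewed distribution and the utility, asset names, and parameter values are invented; the paper instead fits a multivariate skew normal by MCMC.

```python
import numpy as np

rng = np.random.default_rng(7)
n_draws = 20000
# Hypothetical posterior predictive returns for two assets (asset 2 is skewed).
r1 = rng.normal(0.05, 0.10, n_draws)
r2 = 0.06 + 0.15 * (rng.gamma(2.0, 1.0, n_draws) - 2.0) / np.sqrt(2.0)

def expected_utility(w, lam=3.0):
    """Mean exponential (CARA) utility of the portfolio return over the draws."""
    port = w * r1 + (1 - w) * r2
    return np.mean(-np.exp(-lam * port))

weights = np.linspace(0, 1, 101)
best = max(weights, key=expected_utility)
print(f"weight on asset 1 maximizing expected utility: {best:.2f}")
```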

Journal ArticleDOI
TL;DR: The importance of proper model assumption in the context of Bayesian phylogenetics is studied by examining >5,000 Bayesian analyses and six nested models of nucleotide substitution to assure that the most appropriate model is assumed by employing both a priori model choice methods and a posteriori model adequacy tests.
Abstract: We studied the importance of proper model assumption in the context of Bayesian phylogenetics by examining >5,000 Bayesian analyses and six nested models of nucleotide substitution. Model misspecification can strongly bias bipartition posterior probability estimates. These biases were most pronounced when rate heterogeneity was ignored. The type of bias seen at a particular bipartition appeared to be strongly influenced by the lengths of the branches surrounding that bipartition. In the Felsenstein zone, posterior probability estimates of bipartitions were biased when the assumed model was underparameterized but were unbiased when the assumed model was overparameterized. For the inverse Felsenstein zone, however, both underparameterization and overparameterization led to biased bipartition posterior probabilities, although the bias caused by overparameterization was less pronounced and disappeared with increased sequence length. Model parameter estimates were also affected by model misspecification. Underparameterization caused a bias in some parameter estimates, such as branch lengths and the gamma shape parameter, whereas overparameterization caused a decrease in the precision of some parameter estimates. We caution researchers to assure that the most appropriate model is assumed by employing both a priori model choice methods and a posteriori model adequacy tests. (Bayesian phylogenetic inference; convergence; Markov chain Monte Carlo; maximum likelihood; model choice; posterior probability.)

BookDOI
TL;DR: Two results that generalize de Finetti's theorem to the quantum mechanical setting are motivated and reviewed: a de Finetti theorem for quantum states and a de Finetti theorem for quantum operations.
Abstract: The classical de Finetti theorem provides an operational definition of the concept of an unknown probability in Bayesian probability theory, where probabilities are taken to be degrees of belief instead of objective states of nature. In this paper, we motivate and review two results that generalize de Finetti's theorem to the quantum mechanical setting: Namely a de Finetti theorem for quantum states and a de Finetti theorem for quantum operations. The quantum-state theorem, in a closely analogous fashion to the original de Finetti theorem, deals with exchangeable density-operator assignments and provides an operational definition of the concept of an "unknown quantum state" in quantum-state tomography. Similarly, the quantum-operation theorem gives an operational definition of an "unknown quantum operation" in quantum-process tomography. These results are especially important for a Bayesian interpretation of quantum mechanics, where quantum states and (at least some) quantum operations are taken to be states of belief rather than states of nature.
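The quantum-state theorem asserts a mixture-of-products representation for exchangeable density-operator assignments; writing the measure as a density P(ρ) for simplicity, the representation takes the following form, with P playing the role of a probability distribution over the "unknown quantum state".

```latex
% Representation asserted by the quantum-state de Finetti theorem: an
% exchangeable sequence of density-operator assignments \rho^{(n)} admits a
% unique mixture-of-product-states form.
\rho^{(n)} \;=\; \int P(\rho)\, \rho^{\otimes n}\, d\rho ,
\qquad P(\rho) \ge 0, \quad \int P(\rho)\, d\rho = 1 .
```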

Journal ArticleDOI
TL;DR: Bayesian theory is used to formulate the inverse problem (IP) of EEG/MEG by considering a third level of inference, known as Bayesian model averaging (BMA), that has been systematically omitted by previous Bayesian formulations of the IP.

Journal ArticleDOI
TL;DR: It is shown that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model, and a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world is introduced.
Abstract: A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this article, we show that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model. We illustrate the approach using an orientation discrimination task and a visual motion detection task. In the case of orientation discrimination, we show that the model network can infer the posterior distribution over orientations and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known random dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. The framework we introduce posits a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world.
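The computation the model network is said to implement is recursive Bayesian filtering for a hidden Markov model, with quantities carried as log posterior probabilities. The sketch below runs that recursion for a made-up two-state HMM and observation sequence; it illustrates the inference problem, not the paper's network architecture.

```python
import numpy as np

log_T = np.log(np.array([[0.9, 0.1],      # state transition matrix P(next | prev)
                         [0.1, 0.9]]))
log_E = np.log(np.array([[0.8, 0.2],      # emission probabilities P(obs | state)
                         [0.3, 0.7]]))
obs = [0, 0, 1, 1, 1]                     # invented observation sequence

def logsumexp(a, axis=None):
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

log_post = np.log(np.array([0.5, 0.5]))   # prior over the initial state
for o in obs:
    # predict: marginalize over the previous state, then update with the evidence
    log_pred = logsumexp(log_post[:, None] + log_T, axis=0)
    log_post = log_pred + log_E[:, o]
    log_post -= logsumexp(log_post)        # renormalize in log space
    print(np.exp(log_post).round(3))       # posterior over hidden states at each step
```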

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposed new hierarchical Bayesian method appropriately resolves the inverse problem even if fMRI data convey inaccurate information, whereas the Wiener filter method is severely degraded by inaccurate fMRI information.

Journal ArticleDOI
TL;DR: In this paper, the authors present a model from geotechnical data and show that the distinction between the trend or systematic error and the spatial error is a modeling choice, not a property of nature.
Abstract: Uncertainty and risk are central features of geotechnical and geological engineering. Engineers can deal with uncertainty by ignoring it, by being conservative, by using the observational method, or by quantifying it. In recent years, reliability analysis and probabilistic methods have found wide application in geotechnical engineering and related fields. The tools are well known, including methods of reliability analysis and decision trees. Analytical models for deterministic geotechnical applications are also widely available, even if their underlying reliability is sometimes suspect. The major issues involve input and output. In order to develop appropriate input, the engineer must understand the nature of uncertainty and probability. Most geotechnical uncertainty reflects lack of knowledge, and probability based on the engineer’s degree of belief comes closest to the profession’s practical approach. Bayesian approaches are especially powerful because they provide probabilities on the state of nature rather than on the observations. The first point in developing a model from geotechnical data is that the distinction between the trend or systematic error and the spatial error is a modeling choice, not a property of nature. Second, properties estimated from small samples may be seriously in error, whether they are used probabilistically or deterministically. Third, experts generally estimate mean trends well but tend to underestimate uncertainty and to be overconfident in their estimates. In this context, engineering judgment should be based on a demonstrable chain of reasoning and not on speculation. One difficulty in interpreting results is that most people, including engineers, have difficulty establishing an allowable probability of failure or dealing with low values of probability. The F-N plot is one useful vehicle for comparing calculated probabilities with observed frequencies of failure of comparable facilities. In any comparison it must be noted that a calculated probability is a lower bound because it must fail to incorporate the factors that are ignored in the analysis. It is useful to compare probabilities of failure for alternative designs, and the reliability methods reveal the contributions of different components to the uncertainty in the probability of failure. Probability is not a property of the world but a state of mind; geotechnical uncertainty is primarily epistemic, Bayesian, and belief based. The current challenges to the profession are to make use of probabilistic methods in practice and to sharpen our investigations and analyses so that each additional data point provides maximal information.
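A small example of the kind of reliability calculation discussed here: treating the factor of safety as lognormal, computing a reliability index and the corresponding probability of failure P(FS < 1), and comparing alternative designs. The design names, mean factors of safety, and coefficients of variation below are invented for illustration; a design with a higher mean factor of safety but larger uncertainty can still have the larger probability of failure.

```python
from math import log, sqrt, erf

def prob_failure(mean_fs, cov_fs):
    """P(FS < 1) for a lognormally distributed factor of safety."""
    sigma_ln = sqrt(log(1 + cov_fs ** 2))
    beta = log(mean_fs / sqrt(1 + cov_fs ** 2)) / sigma_ln    # reliability index
    return 0.5 * (1 + erf(-beta / sqrt(2)))                   # standard normal CDF at -beta

designs = {"design A": (1.5, 0.15), "design B": (1.3, 0.08), "design C": (2.0, 0.30)}
for name, (mean_fs, cov_fs) in designs.items():
    print(f"{name}: mean FS {mean_fs:.2f}, COV {cov_fs:.2f}, "
          f"P_f = {prob_failure(mean_fs, cov_fs):.2e}")
```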