
Showing papers on "Bayesian inference published in 1990"


Journal ArticleDOI
TL;DR: The use of the Gibbs sampler as a method for calculating Bayesian marginal posterior and predictive densities is reviewed and illustrated with a range of normal data models, including variance components, unordered and ordered means, hierarchical growth curves, and missing data in a crossover trial.
Abstract: The use of the Gibbs sampler as a method for calculating Bayesian marginal posterior and predictive densities is reviewed and illustrated with a range of normal data models, including variance components, unordered and ordered means, hierarchical growth curves, and missing data in a crossover trial. In all cases the approach is straightforward to specify distributionally and to implement computationally, with output readily adapted for required inference summaries.

1,020 citations
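For readers new to the method, here is a minimal, self-contained sketch of the kind of Gibbs sampler the paper reviews, for the simplest normal model with conjugate priors (all data and hyperparameters below are illustrative assumptions, not taken from the paper):

```python
# A minimal Gibbs sampler for normal data with unknown mean and variance.
# Hypothetical conjugate priors: mu ~ N(m0, v0), sigma^2 ~ Inv-Gamma(a0, b0).
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=50)        # synthetic data
n, ybar = len(y), y.mean()
m0, v0, a0, b0 = 0.0, 100.0, 2.0, 2.0    # illustrative hyperparameters

mu, sigma2 = ybar, y.var()
draws = []
for _ in range(5000):
    # draw mu | sigma2, y from its normal full conditional
    v_n = 1.0 / (1.0 / v0 + n / sigma2)
    m_n = v_n * (m0 / v0 + n * ybar / sigma2)
    mu = rng.normal(m_n, np.sqrt(v_n))
    # draw sigma2 | mu, y from its inverse-gamma full conditional
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * np.sum((y - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)
    draws.append((mu, sigma2))

post = np.array(draws[1000:])            # discard burn-in
print("posterior mean of mu:", post[:, 0].mean())
```

The sampler simply alternates between the two full conditional distributions; the retained draws approximate the joint posterior, from which marginal summaries are read off directly.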



Book ChapterDOI
01 Jan 1990
TL;DR: The Bayesian approach to probability theory is presented as an alternative to the currently used long-run relative frequency approach, and Bayesian solutions to two interesting astrophysical problems are outlined: the measurement of weak signals in a strong background and the analysis of the neutrinos detected from supernova SN 1987A.
Abstract: The Bayesian approach to probability theory is presented as an alternative to the currently used long-run relative frequency approach, which does not offer clear, compelling criteria for the design of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions to well-posed statistical problems, and is historically the original approach to statistics. The reasons for earlier rejection of Bayesian methods are discussed, and it is noted that the work of Cox, Jaynes, and others answers earlier objections, giving Bayesian inference a firm logical and mathematical foundation as the correct mathematical language for quantifying uncertainty. The Bayesian approaches to parameter estimation and model comparison are outlined and illustrated by application to a simple problem based on the Gaussian distribution. As further illustrations of the Bayesian paradigm, Bayesian solutions to two interesting astrophysical problems are outlined: the measurement of weak signals in a strong background, and the analysis of the neutrinos detected from supernova SN 1987A. A brief bibliography of astrophysically interesting applications of Bayesian inference is provided.

207 citations
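As a hedged illustration of the chapter's Gaussian parameter-estimation example, here is a grid-based posterior for an unknown mean under a flat prior (the data, known noise level, and prior are our assumptions):

```python
# Posterior for an unknown Gaussian mean on a grid, flat prior, known sigma=1.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=20)     # synthetic measurements
mu_grid = np.linspace(0.0, 6.0, 601)

# log-likelihood of each candidate mean; flat prior => posterior ∝ likelihood
loglik = -0.5 * ((data[:, None] - mu_grid[None, :]) ** 2).sum(axis=0)
post = np.exp(loglik - loglik.max())
post /= np.trapz(post, mu_grid)          # normalize to a density

print("posterior mode:", mu_grid[post.argmax()])
```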


Journal ArticleDOI
TL;DR: A Technometrics review of the book Bayesian Statistics: Principles, Models, and Applications.
Abstract: (1990). Bayesian Statistics: Principles, Models, and Applications. Technometrics: Vol. 32, No. 4, pp. 453-454.

200 citations


Journal ArticleDOI
TL;DR: This paper reviews and critiques various statistical approaches that have been proposed for the design and analysis of sequential experiments in medical applications, including group sequential tests, stochastic curtailment, repeated confidence intervals, and Bayesian procedures.
Abstract: Most medical trials are monitored for early evidence of treatment differences or harmful side effects. In this paper we review and critique various statistical approaches that have been proposed for the design and analysis of sequential experiments in medical applications. We discuss group sequential tests, stochastic curtailment, repeated confidence intervals, and Bayesian procedures. The role that a statistical stopping rule should play in the final analysis is examined.

168 citations
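One Bayesian monitoring procedure of the kind reviewed can be sketched as follows: stop the trial when the posterior probability of a positive treatment effect crosses a threshold (all numbers and the normal model are hypothetical):

```python
# Interim monitoring with a normal-normal model; stop on posterior evidence.
import numpy as np
from scipy.stats import norm

prior_mean, prior_var = 0.0, 4.0         # vague prior on the treatment effect
sigma2 = 1.0                             # known per-observation variance
threshold = 0.99

rng = np.random.default_rng(2)
effects = rng.normal(0.5, 1.0, size=200) # accruing patient-level differences
for n in (50, 100, 150, 200):            # interim analyses
    xbar = effects[:n].mean()
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + n * xbar / sigma2)
    p_pos = 1.0 - norm.cdf(0.0, loc=post_mean, scale=np.sqrt(post_var))
    print(f"n={n}: P(effect > 0 | data) = {p_pos:.3f}")
    if p_pos > threshold:
        print("stop: efficacy boundary crossed")
        break
```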


Journal ArticleDOI
TL;DR: In this paper, the authors compare the predictions of a theoretical model of a common knowledge inference process with actual behavior, and find that the theoretical model roughly predicts the observed behavior, but the actual inference process is clearly less efficient than the standard of the theoretical models.
Abstract: This paper reports on an experimental study of-the way in which individuals make inferences from publicly available information. We compare the predictions of a theoretical model of a common knowledge inference process with actual behavior. In the theoretical model, "perfect Bayesians," starting with private information, take actions; an aggregate statistic is made publicly available; the individuals do optimal Bayesian updating and take new actions; and the process continues until there is a common knowledge equilibrium with complete information pooling. We find that the theoretical model roughly predicts the observed behavior, but the actual inference process is clearly less efficient than the standard of the theoretical model, and while there is some pooling, it is incomplete.

167 citations


Journal ArticleDOI
TL;DR: An upper bound is given for the posterior probability of a measurable set A when the prior lies in a class of probability measures P. The bound is a rational function of two Choquet integrals, and it is sharp if and only if the upper prior probability is 2-alternating, provided P is weakly compact and closed with respect to majorization.
Abstract: We give an upper bound for the posterior probability of a measurable set $A$ when the prior lies in a class of probability measures $\mathscr{P}$. The bound is a rational function of two Choquet integrals. If $\mathscr{P}$ is weakly compact and is closed with respect to majorization, then the bound is sharp if and only if the upper prior probability is 2-alternating. The result is used to compute bounds for several sets of priors used in robust Bayesian inference. The result may be regarded as a characterization of 2-alternating Choquet capacities.

152 citations
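For orientation, the bound has the following shape (our hedged rendering, with $L_x$ denoting the likelihood and the integrals taken as Choquet integrals with respect to the upper prior probability $\overline{P}$ and lower prior probability $\underline{P}$; see the paper for the precise statement):

$$\overline{P}(A \mid x) \;\le\; \frac{\int_A L_x \, d\overline{P}}{\int_A L_x \, d\overline{P} + \int_{A^c} L_x \, d\underline{P}}.$$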


Journal ArticleDOI
TL;DR: This work tests the mean-variance efficiency of a given portfolio within a Bayesian framework, imposing a prior on all the parameters of a multivariate regression model, and uses Monte Carlo numerical integration to evaluate 90-dimensional integrals accurately.

115 citations


Journal ArticleDOI
TL;DR: The first five sections of the paper describe the Bayesian paradigm for statistics and its relationship with other attitudes towards inference and an attempt is made to appreciate how accurate formulae like the extension of the conversation, the product law and Bayes rule are in evaluating probabilities.
Abstract: The first five sections of the paper describe the Bayesian paradigm for statistics and its relationship with other attitudes towards inference. Section 1 outlines Wald's major contributions and explains how they omit the vital consideration of coherence. When this point is included the Bayesian view results, with the main difference that Waldean ideas require the concept of the sample space, whereas the Bayesian approach may dispense with it, using a probability distribution over parameter space instead. Section 2 relates statistical ideas to the problem of inference in science. Scientific inference is essentially the passage from observed, past data to unobserved, future data. The roles of models and theories in doing this are explored. The Bayesian view is that all this should be accomplished entirely within the calculus of probability and Section 3 justifies this choice by various axiom systems. The claim is made that this leads to a quite different paradigm from that of classical statistics and, in particular, problems in the latter paradigm cease to have importance within the other. Point estimation provides an illustration. Some counter-examples to the Bayesian view are discussed. It is important that statistical conclusions should be usable in making decisions. Section 4 explains how the Bayesian view achieves this practicality by introducing utilities and the principle of maximizing expected utility. Practitioners are often unhappy with the ideas of basing inferences on one number, probability, or action on another, an expectation, so these points are considered and the methods justified. Section 5 discusses why the Bayesian viewpoint has not achieved the success that its logic suggests. Points discussed include the relationship between the inferences and the practical situation, for example with multiple comparisons; and the lack of the need to confine attention to normality or the exponential family. Its extensive use by nonstatisticians is documented. The most important objection to the Bayesian view is that which rightly says that probabilities are hard to assess. Consequently Section 6 considers how this might be done and an attempt is made to appreciate how accurate formulae like the extension of the conversation, the product law and Bayes rule are in evaluating probabilities.

106 citations
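The three formulae Section 6 examines can be made concrete with a toy calculation (numbers ours):

```python
# Extension of the conversation, product law, and Bayes rule on made-up numbers.
p_B = 0.3
p_A_given_B, p_A_given_notB = 0.8, 0.1

p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)  # extension of the conversation
p_AB = p_A_given_B * p_B                              # product law
p_B_given_A = p_AB / p_A                              # Bayes rule

print(p_A, p_AB, p_B_given_A)   # 0.31, 0.24, ~0.774
```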


Journal ArticleDOI
TL;DR: Here Bayesian analysis is extended to the problem of selecting the model which is most probable in view of the data and all the prior information, and in addition to the analytic calculation, two examples are given.

100 citations
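A hedged sketch of model selection by posterior probability, in the spirit of the paper: a point-null mean versus a diffuse alternative, compared through (grid-approximated) marginal likelihoods. The data, priors, and equal prior model probabilities are our assumptions:

```python
# Posterior odds for M1 (unknown mean, mu ~ N(0,1)) vs M0 (mean exactly 0).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = rng.normal(0.4, 1.0, size=30)

loglik0 = norm.logpdf(y, 0.0, 1.0).sum()              # M0 has no free parameter

mu = np.linspace(-3, 3, 1201)                         # grid for M1's mean
loglik1 = norm.logpdf(y[:, None], mu[None, :], 1.0).sum(axis=0)
prior1 = norm.pdf(mu, 0.0, 1.0)
marg1 = np.trapz(np.exp(loglik1 - loglik1.max()) * prior1, mu) * np.exp(loglik1.max())

post_odds = marg1 / np.exp(loglik0)                   # equal prior model probabilities
print("posterior odds M1:M0 =", post_odds)
```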


Journal ArticleDOI
TL;DR: In this article, the authors show that the mathematical structure of belief functions makes them suitable for generating classes of prior distributions to be used in robust Bayesian inference, and they also extend an integral representation given by Dempster to infinite sets.
Abstract: We show that the mathematical structure of belief functions makes them suitable for generating classes of prior distributions to be used in robust Bayesian inference. In particular, the upper and lower bounds of the posterior probability content of a measurable subset of the parameter space may be calculated directly in terms of upper and lower expectations (Theorem 4.1). We also extend an integral representation given by Dempster to infinite sets (Theorem 2.1).

Book
10 Oct 1990
TL;DR: A textbook treatment of Bayesian inference, from basic concepts and numerical techniques to special applications, including posterior analysis based on distributions for robust maximum likelihood type estimates and the reconstruction of digital images.
Abstract: Basic concepts.- Bayes' Theorem.- Prior density functions.- Point estimation.- Confidence regions.- Hypothesis testing.- Predictive analysis.- Numerical techniques.- Models and special applications.- Linear models.- Nonlinear models.- Mixed models.- Linear models with unknown variance and covariance components.- Classification.- Posterior analysis based on distributions for robust maximum likelihood type estimates.- Reconstruction of digital images.

Book ChapterDOI
01 Jan 1990
TL;DR: The sequence of processing decisions derived from evaluating the diagrams at each stage is the same as the sequence that would have been derived by evaluating the final influence diagram that contains all random variables created during the run of the vision system.
Abstract: We show an approach to automated control of machine vision systems based on incremental creation and evaluation of a particular family of influence diagrams that represent hypotheses of imagery interpretation and possible subsequent processing decisions. In our approach, model-based machine vision techniques are integrated with hierarchical Bayesian inference to provide a framework for representing and matching instances of objects and relationships in imagery and for accruing probabilities to rank order conflicting scene interpretations. We extend a result of Tatman and Shachter to show that the sequence of processing decisions derived from evaluating the diagrams at each stage is the same as the sequence that would have been derived by evaluating the final influence diagram that contains all random variables created during the run of the vision system.

Book ChapterDOI
01 Jan 1990
TL;DR: An empirical evaluation of three inference methods for uncertain reasoning is presented in the context of Pathfinder, a large expert system for the diagnosis of lymph-node pathology.
Abstract: In this paper, an empirical evaluation of three inference methods for uncertain reasoning is presented in the context of Pathfinder, a large expert system for the diagnosis of lymph-node pathology. The inference procedures evaluated are (1) Bayes' theorem, assuming evidence is conditionally independent given each hypothesis; (2) odds–likelihood updating, assuming evidence is conditionally independent given each hypothesis and given the negation of each hypothesis; and (3) an inference method related to the Dempster–Shafer theory of belief. Both expert-rating and decision-theoretic metrics are used to compare the diagnostic accuracy of the inference methods.
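Methods (1) and (2) can be sketched on made-up numbers (three hypotheses, two pieces of evidence); this illustrates the update rules only, not Pathfinder's actual knowledge base:

```python
# (1) simple Bayes with conditional independence; (2) odds-likelihood updating.
import numpy as np

prior = np.array([0.5, 0.3, 0.2])          # P(h) for three exhaustive hypotheses
like = np.array([[0.9, 0.2],               # P(e_j | h_i) for two evidence items,
                 [0.4, 0.6],               # assumed conditionally independent
                 [0.1, 0.1]])

# (1) Bayes' theorem under conditional independence given each hypothesis
post = prior * like.prod(axis=1)
post /= post.sum()

# (2) odds-likelihood updating: multiply prior odds by likelihood ratios,
# which additionally assumes independence given each hypothesis's negation
def p_e_given_not_h(j, i):
    w = np.delete(prior, i) / (1 - prior[i])      # renormalized rival priors
    return np.dot(w, np.delete(like[:, j], i))

odds = prior / (1 - prior)
for i in range(3):
    for j in range(2):
        odds[i] *= like[i, j] / p_e_given_not_h(j, i)
post_odds = odds / (1 + odds)   # back to probabilities (not normalized across h)

print(post, post_odds)
```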

Book ChapterDOI
02 Jul 1990
TL;DR: A new concept of conditional information is introduced; it is based on the lower-upper probabilities definition but adds an estimate of the true probability distribution, obtained by a method analogous to statistical maximum likelihood.
Abstract: This paper considers the concept of conditioning for a family of possible probability distributions. First, the most widely used definitions are reviewed, in particular Dempster conditioning and upper-lower probabilities conditioning. It is shown that the former tends to be too informative and the latter, on the contrary, too uninformative. Other definitions, such as weak and strong conditioning, are also considered. A new concept of conditional information is then introduced. It is based on the lower-upper probabilities definition, but adds an estimate of the true probability distribution, obtained by a method analogous to statistical maximum likelihood.
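Upper-lower probabilities conditioning is easy to illustrate for a finite family: condition each member distribution on the event and take the pointwise bounds (the family and events below are our toy numbers):

```python
# Upper-lower conditioning for a finite family of distributions over {0,1,2}.
import numpy as np

family = np.array([[0.5, 0.3, 0.2],
                   [0.2, 0.5, 0.3],
                   [0.3, 0.3, 0.4]])   # candidate distributions
B = np.array([True, True, False])      # conditioning event B = {0, 1}
A = np.array([True, False, False])     # query event A = {0}

# condition each member on B, then take bounds over the family
cond = family[:, B] / family[:, B].sum(axis=1, keepdims=True)
pA = cond[:, A[B]].sum(axis=1)
print("lower/upper P(A|B):", pA.min(), pA.max())   # 0.286..., 0.625
```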

Journal ArticleDOI
TL;DR: In this paper, elicited estimates of farmers' subjective beliefs about the mean and variance of wheat variety yields were used to test propositions about Bayesian learning developed in the recent literature on innovation adoption.
Abstract: In this study, elicited estimates of farmers' subjective beliefs about the mean and variance of wheat variety yields were used to test propositions about Bayesian learning developed in the recent literature on innovation adoption. A series of empirical tests of the Bayesian adoption model were conducted using beliefs elicited from farm surveys conducted in 1982, 1983 and 1984. The results of the analysis neither confirm nor reject the Bayesian approach as a model of how farmers revise subjective beliefs, but do raise serious doubts about its realism, and suggest some issues requiring further investigation. Shortcomings in the elicitation techniques are discussed and the assumptions of the Bayesian model are reviewed.
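The normal learning rule that underlies the Bayesian adoption model being tested can be sketched as precision-weighted updating of beliefs about mean yield (constants are illustrative, not the authors' estimates):

```python
# One-observation normal-normal update of a belief about mean yield.
def update(prior_mean, prior_var, y, noise_var):
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * (prior_mean / prior_var + y / noise_var)
    return post_mean, post_var

m, v = 2.0, 1.0                      # prior belief about a new variety (t/ha)
for y in (2.8, 3.1, 2.6):            # three seasons of observed yields
    m, v = update(m, v, y, noise_var=0.5)
    print(f"belief: mean={m:.2f}, var={v:.3f}")
```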

01 Jan 1990
TL;DR: In this paper, the product of spacings is proposed as an alternative to the likelihood in Bayesian inference; it preserves the structure and properties of the Bayesian method and has computational advantages.
Abstract: The product of spacings is suggested as an alternative to the likelihood in Bayesian inference. It is shown that the product of spacings can be used in place of the likelihood without losing the structure and properties of the Bayesian method. The method is also shown to have computational advantages.
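A hedged sketch of the idea: replace the likelihood with the product of spacings of the model CDF at the ordered observations, and combine it with a prior in the usual way (the exponential model, flat prior, and data are our assumptions):

```python
# Product-of-spacings posterior for an exponential rate, on a grid.
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.exponential(scale=2.0, size=25))
lam = np.linspace(0.05, 2.0, 400)                 # candidate rates

F = 1.0 - np.exp(-lam[None, :] * x[:, None])      # model CDF at each data point
F = np.vstack([np.zeros_like(lam), F, np.ones_like(lam)])  # add F=0 and F=1

spacings = np.diff(F, axis=0)                     # D_i = F(x_(i)) - F(x_(i-1))
log_ps = np.log(np.clip(spacings, 1e-300, None)).sum(axis=0)

prior = np.ones_like(lam)                         # flat prior (assumption)
post = np.exp(log_ps - log_ps.max()) * prior
post /= np.trapz(post, lam)
print("posterior mode for rate:", lam[post.argmax()])
```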

Book
01 Jan 1990
TL;DR: A statistics textbook for business and economics, covering basic probability and statistics, inference for means and proportions, regression, and topics in classical and Bayesian inference, including Bayesian decision theory.
Abstract: BASIC PROBABILITY AND STATISTICS. The Nature of Statistics. Descriptive Statistics. Probability. Probability Distributions. Two Random Variables. INFERENCE FOR MEANS AND PROPORTIONS. Sampling. Point Estimation. Confidence Intervals. Hypothesis Testing. Analysis of Variance. REGRESSION: RELATING TWO OR MORE VARIABLES. Fitting a Line. Simple Regression. Multiple Regression. Regression Extensions. Correlation. TOPICS IN CLASSICAL AND BAYESIAN INFERENCE. Nonparametric and Robust Statistics. Chi Square Tests. Maximum Likelihood Estimation. Bayesian Inference. Bayesian Decision Theory. SPECIAL TOPICS FOR BUSINESS AND ECONOMICS. Decision Trees. Index Numbers. Sampling Designs. Time Series. Simultaneous Equations. Appendices. Tables. References. Answers to Odd-Numbered Problems. Glossary of Common Symbols. Index.

Book ChapterDOI
01 Jan 1990
TL;DR: This study examined the correspondence between changes in probabilities and relative probability phrases, and found the most descriptively accurate of these three to be that each such phrase corresponds to a fixed difference in probability (rather than fixed ratio of probabilities or of odds).
Abstract: Bayesian inference systems should be able to explain their reasoning to users, translating from numerical to natural language. Previous empirical work has investigated the correspondence between absolute probabilities and linguistic phrases. This study extends that work to the correspondence between changes in probabilities (updates) and relative probability phrases, such as “much more likely” or “a little less likely.” Subjects selected such phrases to best describe numerical probability updates. We examined three hypotheses about the correspondence, and found the most descriptively accurate of these three to be that each such phrase corresponds to a fixed difference in probability (rather than fixed ratio of probabilities or of odds). The empirically derived phrase selection function uses eight phrases and achieved a 72% accuracy in correspondence with the subjects’ actual usage.
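The winning fixed-difference hypothesis amounts to a lookup from probability change to phrase. A sketch with illustrative cutoffs follows (the paper's fitted values and full eight-phrase set are not reproduced here):

```python
# Map a probability update to a relative-probability phrase by fixed
# differences in probability (cutoffs below are hypothetical).
PHRASES = [(-0.30, "much less likely"), (-0.10, "less likely"),
           (-0.02, "a little less likely"), (0.02, "about as likely"),
           (0.10, "a little more likely"), (0.30, "more likely")]

def phrase(p_old, p_new):
    d = p_new - p_old
    for cutoff, label in PHRASES:
        if d <= cutoff:
            return label
    return "much more likely"

print(phrase(0.40, 0.45))   # 'a little more likely'
```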

Journal ArticleDOI
TL;DR: In this article, a conditional approach to evaluating the observed level of significance is developed, and an importance sampling technique is used to improve the approximation and assess the accuracy of the conditional approximation to the marginal observed level.
Abstract: Analysis of nonnormal linear models leads to an initial conditioning on the standardized residuals, giving an unnormed density on $R^k$, where k is the number of parameters. To obtain an observed level of significance for a single parameter it is then necessary to calculate a marginal probability, thus requiring integration in k dimensions. In this paper a conditional approach to evaluating the observed level of significance is developed, and an importance sampling technique is used to improve the approximation and assess the accuracy of the conditional approximation to the marginal observed level. A further approximation based on the invariant version (Fraser, 1990) of the Lugannani & Rice (1980) formula is also proposed. The approach extends to the evaluation of real pivots and to Bayesian inference for a single parameter component.

Book
01 Dec 1990
TL;DR: An edited reference volume of entries on statistical inference in time series, with contributions from leading authors on topics ranging from ARIMA models and Bayesian inference to spectral analysis, statistical decision theory, and the Wiener process.
Abstract: Statistical Inference in Time Series P.Whittle - ARIMA Models A.C.Harvey - Autoregressive and Moving Average Time Series Models M.Nerlove & F.X.Diebold - Bayesian Inference A.Zellner - Continuous and Discrete Time Models C.A.Sims - Edgeworth as a Statistician S.M.Stigler - Ergodic Theory W.Parry - Estimation M.Nerlove & F.X.Diebold - Factor Analysis I.Adelman - Ronald Aylmer Fisher A.W.F.Edwards - Forecasting C.W.J.Granger - Heteroskedasticity J.Kmenta - H.Hotelling K.J.Arrow - Hypothesis Testing G.C.Chow - Least Squares H.White - Likelihood A.W.F.Edwards - Martingales A.F.Karr - Maximum Likelihood R.L.Basmann - Meaningfulness and Invariance L.Narens & R.Duncan Luce - Mean Value S.H.Chew - Measurement R.Duncan Luce & L.Narens - Monte Carlo Methods J.G.Cragg - Multivariate Time Series Models C.A.Sims - Non-parametric Statistical Methods J.L.Gastwirth - Outliers W.S.Krasker - Prediction P.Whittle - Principal Components T.Kloek - Randomization J.O.Berger - Random Variables I.R.Savage - Regression and Correlation Analysis D.V.Lindley - Residuals F.J.Anscombe - Semiparametric Estimation S.R.Cosslett - Sequential Analysis J.O.Berger - Eugen Slutsky G.Gandolfo - Spectral Analysis C.W.Granger - Spline Functions D.J.Poirier - Stationary Time Series E.J.Hannan - Statistical Decision Theory J.O.Berger - Statistical Inference D.V.Lindley - Time Series Analysis M.Nerlove & F.X.Diebold - Transformation of Statistical Variables D.R.Cox - Abraham Wald E.R.Weintraub - Wiener Process A.G.Malliaris

Journal ArticleDOI
TL;DR: KNET as discussed by the authors is an environment for constructing probabilistic, knowledge-intensive systems within the axiomatic framework of decision theory, which defines a complete separation between the hypermedia user interface on the one hand, and the representation and management of expert opinion on the other.

Book ChapterDOI
01 Mar 1990
TL;DR: It is established that the only coherent preference schemes are the two agents' preferences themselves, with coherence defined by axiomatic restrictions on preferences over such acts.
Abstract: Questions of consensus among Bayesian investigators are discussed with regard to: (1) inference and (2) decisions. Concerning topic (1), results about asymptotic consensus and certainty with increasing evidence are reported. The findings deal with extending a conclusion of D. Blackwell and L. Dubins (1962) from pairs of agents to larger communities. Increasing evidence creates (almost sure) certainty of the truth and, depending upon the size of the community, it leads to varieties of consensus for conditional (posterior) distributions. Concerning topic (2), results are reported on the shared agreements of two Bayesian decision makers who have some differences in their probabilities for events and some differences in their utilities for outcomes. The results are couched in a setting where acts are so-called horse lotteries, with coherence defined by axiomatic restrictions on preferences over such acts. Subject to a weak Pareto condition, it is established that the only coherent preference schemes are the two agents' preferences themselves. Moreover, stronger Pareto conditions exclude even these extreme solutions.

Journal ArticleDOI
TL;DR: In this article, the mean-variance efficiency of a portfolio with a Bayesian framework is evaluated using Monte Carlo numerical integration, and the sensitivity of the inferences to the prior is investigated using three distributions.
Abstract: We test the mean-variance efficiency of a given portfolio with a Bayesian framework. Our test is more direct than Shanken's (1987), because we impose a prior on all the parameters of the multivariate regression model. The approach is also easily adapted to other problems. We use Monte Carlo numerical integration to accurately evaluate 90-dimensional integrals. Posterior-odds ratios are calculated for 12 industry portfolios from 1926-1987. The sensitivity of the inferences to the prior is investigated using three distributions. The probability that the given portfolio is mean-variance efficient is small for a range of plausible priors. This is the working paper version of our 1990 Journal of Financial Economics article.
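The computational core, Monte Carlo evaluation of a high-dimensional posterior integral, can be sketched generically (90 dimensions as in the paper, but with a trivial stand-in integrand and an importance density of our choosing):

```python
# Toy Monte Carlo integration of a posterior functional in 90 dimensions.
import numpy as np

rng = np.random.default_rng(3)
d, n_draws = 90, 100_000
z = rng.normal(size=(n_draws, d))        # draws from the importance density
g = np.exp(-0.5 * (z ** 2).sum(axis=1) / 10.0)   # stand-in integrand

est = g.mean()
se = g.std(ddof=1) / np.sqrt(n_draws)    # Monte Carlo standard error
print(f"MC estimate {est:.5f} +/- {se:.5f}")
```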

Journal Article
TL;DR: In this article, a Bayesian technique is used to update predictions of axial pile capacity on the basis of pile driving records, which enables one to systematically account for and combine objective information and subjective judgement.
Abstract: The paper outlines a Bayesian technique to update predictions of pile driving and axial pile capacity on the basis of pile driving records. The approach enables one to systematically account for and combine objective information and subjective judgement. The updated values then represent best estimates. The methodology and an example calculation are described. Recommendations for improvements are suggested.

Journal ArticleDOI
TL;DR: A theorem is given on Bayesian estimation, from which the strong consistency of well-known information criteria with penalty terms is inferred, and the ‘best model structure’ is defined.
Abstract: Model structure estimation is first accommodated in the general framework of prediction error identification. The concept of the ‘best model structure’ is defined. A theorem is given on Bayesian estimation, from which the strong consistency of well-known information criteria with penalty terms is inferred.

Book ChapterDOI
01 Jan 1990
TL;DR: The input rate and impulse response can both be characterised using positive additive distributions, which should be reconstructed from experimental data by a process of Bayesian inference using Skilling’s generalisation of the Shannon/Jaynes entropy.
Abstract: In many cases the processes by which a drug is handled in the body, once it has reached the bloodstream, are essentially linear at therapeutic doses. If so, then the concentration in blood after an oral dose may be considered as the convolution of the rate at which the drug reaches the bloodstream with the response of the body to an ‘impulse’ of the drug applied directly into the bloodstream. The input rate may be treated as a ‘blurred’ version of a positive additive distribution, where the ‘blurring’ reflects the diffusive processes which the drug must undergo during its transit from the point of dosing to the general circulation. The standard pharmacokinetic compartmental model of the body leads to an impulse response function which is of the form \( \sum_i A_i e^{-\lambda_i t} \). One can extend this model, and express the function in terms of a continuous distribution of time constants λ, rather than just a small number of discrete values. Thus the input rate and impulse response can both be characterised using positive additive distributions, which should be reconstructed from experimental data by a process of Bayesian inference using Skilling’s generalisation of the Shannon/Jaynes entropy.
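A discretized sketch of the convolution model described above: blood concentration as the input rate convolved with a sum-of-exponentials impulse response (all constants and shapes are illustrative):

```python
# Concentration = input rate * impulse response (discrete convolution).
import numpy as np

dt = 0.1
t = np.arange(0, 24, dt)                        # hours
input_rate = np.exp(-((t - 1.5) ** 2) / 0.5)    # 'blurred' oral input (arbitrary)
A, lam = np.array([1.0, 0.3]), np.array([0.8, 0.1])
impulse = (A[:, None] * np.exp(-lam[:, None] * t[None, :])).sum(axis=0)

conc = np.convolve(input_rate, impulse)[: len(t)] * dt
print("peak concentration at t =", t[conc.argmax()], "h")
```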

Book ChapterDOI
TL;DR: In many expert systems, estimates are made of probabilities and use is made of Bayes’ Theorem; but in applying the theorem it is often assumed that some of the probabilities are independent.
Abstract: In many expert systems, estimates are made of probabilities and use is made of Bayes’ Theorem; but in applying the theorem it is often assumed that some of the probabilities are independent. These are sometimes known as “simple Bayes” models. One reason for assuming independence is that it is believed that without this assumption, the complexity of the calculations would become totally unmanageable as the number of pieces of evidence increases.

Journal ArticleDOI
TL;DR: The Likelihood Principle of Bayesian inference asserts that only likelihoods matter to single-stage inference; this has unfortunate implications; it does not permit the inputs to Bayesian arithmetic at all levels to be likelihood ratios.
Abstract: The Likelihood Principle of Bayesian inference asserts that only likelihoods matter to single-stage inference. A likelihood is the probability of evidence given a hypothesis multiplied by a positive constant. The constant cancels out of simple versions of Bayes's Theorem, and so is irrelevant to single-stage inferences. Most non-statistical inferences require a multi-stage path from evidence to hypotheses; testimony that an event occurred does not guarantee that in fact it did. Hierarchical Bayesian models explicate such cases. For such models, the Likelihood Principle applies to a collection of data elements treated as a single datum conditionally independent of other similar collections. It does not necessarily apply to a single data element taken alone. This has unfortunate implications; in particular, it does not permit the inputs to Bayesian arithmetic at all levels to be likelihood ratios. These issues are sorted out in the context of a trial in which one author is accused of murdering another, with the third as a key witness.
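The cancellation on which the Likelihood Principle rests is a one-line computation: multiplying every likelihood by the same positive constant leaves the posterior unchanged (numbers ours):

```python
# The constant c cancels out of simple Bayes' theorem.
import numpy as np

prior = np.array([0.6, 0.4])
lik = np.array([0.02, 0.05])
for c in (1.0, 7.3):
    post = prior * (c * lik)
    print(post / post.sum())     # identical for any c > 0
```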

Proceedings ArticleDOI
17 Jun 1990
TL;DR: A scheme is presented for translating high-level descriptions of conceptual hierarchies into a neural network representation that provably approximates the behavior of this net under a stochastic simulation rule.
Abstract: A scheme is presented for translating high-level descriptions of conceptual hierarchies into a neural network representation. The intuitive semantics of a conceptual hierarchy is provided by a Bayesian net, and the neural network implementation provably approximates the behavior of this net under a stochastic simulation rule.
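A minimal stochastic-simulation sketch for a two-node Bayesian net (Rain → WetGrass; the tables are ours), of the kind the translation scheme targets:

```python
# Forward sampling of a tiny Bayesian net, then rejection to estimate a
# conditional probability.
import numpy as np

rng = np.random.default_rng(6)
p_rain = 0.2
p_wet_given = {True: 0.9, False: 0.1}    # P(wet | rain)

samples = []
for _ in range(20_000):
    rain = rng.random() < p_rain
    wet = rng.random() < p_wet_given[rain]
    samples.append((rain, wet))

# estimate P(rain | wet) by counting among samples with wet == True
wet_samps = [r for r, w in samples if w]
print("P(rain | wet) ≈", sum(wet_samps) / len(wet_samps))   # exact: ~0.692
```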