
Showing papers on "Bayesian inference published in 1986"


Book
01 Jan 1986
TL;DR: In this article, the authors present an integrative treatment of the principles of decision analysis in a behavioral context, covering sensitivity analysis, the value-utility distinction, multistage inference, and attitudes toward risk, and attempt to make intuitive sense of what the literature has treated as endemic biases and other errors of human judgement.
Abstract: Decision analysis is a technology designed to help individuals and organizations make wise inferences and decisions. It synthesises ideas from economics, statistics, psychology, operations research, and other disciplines. A great deal of behavioural research is relevant to decision analysis; behavioural scientists have both suggested easy and natural ways to describe and quantify problems and shown the kind of errors to which unaided intuitive judgements can lead. This long-awaited book offers the first integrative presentation of the principles of decision analysis in a behavioural context. The authors break new ground on a variety of technical topics (sensitivity analysis, the value-utility distinction, multistage inference, attitudes toward risk), and attempt to make intuitive sense out of what have been treated in the literature as endemic biases and other errors of human judgement. Those interested in artificial intelligence will find it the easiest presentation of hierarchical Bayesian inference available.

2,616 citations


Journal ArticleDOI
TL;DR: In this paper, the authors test a variety of such methods in the context of combining forecasts of GNP from four major econometric models and find that a simple average, the normal model with an independence assumption, and the Bayesian model perform better than the other approaches that are studied here.
Abstract: A method for combining forecasts may or may not account for dependence and differing precision among forecasts. In this article we test a variety of such methods in the context of combining forecasts of GNP from four major econometric models. The methods include one in which forecasting errors are jointly normally distributed, several variants of this model, some simpler procedures, and a Bayesian approach with a prior distribution based on exchangeability of forecasters. The results indicate that a simple average, the normal model with an independence assumption, and the Bayesian model perform better than the other approaches studied here.
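As a sketch of the kinds of combination rules being compared (hypothetical error data and forecasts; not the authors' exact estimators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical past forecast errors from four models (rows: quarters).
errors = rng.normal(0.0, [1.0, 1.5, 2.0, 2.5], size=(40, 4))
new_forecasts = np.array([3.1, 2.8, 3.5, 2.6])  # current-quarter forecasts

# Simple average: ignores differing precision and dependence.
simple = new_forecasts.mean()

# Normal model with independence: weight each forecast by 1/variance.
w = 1.0 / errors.var(axis=0, ddof=1)
precision_weighted = np.dot(w, new_forecasts) / w.sum()

# Full normal model: weights from the inverse error covariance (accounts
# for dependence among forecasters).
S_inv = np.linalg.inv(np.cov(errors, rowvar=False))
w_full = S_inv.sum(axis=1) / S_inv.sum()
full_normal = np.dot(w_full, new_forecasts)

print(simple, precision_weighted, full_normal)
```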

343 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that most scientific data analysis is carried out in a non-Bayesian framework and present some practical examples of data analysis in which the Bayesian approach is difficult but Fisherian/frequentist solutions are relatively easy.
Abstract: Originally a talk delivered at a conference on Bayesian statistics, this article attempts to answer the following question: why is most scientific data analysis carried out in a non-Bayesian framework? The argument consists mainly of some practical examples of data analysis, in which the Bayesian approach is difficult but Fisherian/frequentist solutions are relatively easy. There is a brief discussion of objectivity in statistical analyses and of the difficulties of achieving objectivity within a Bayesian framework. The article ends with a list of practical advantages of Fisherian/frequentist methods, which so far seem to have outweighed the philosophical superiority of Bayesianism.

309 citations


Journal ArticleDOI
John Geweke
TL;DR: In this paper, the inequality constrained normal linear regression model is approached as a problem in Bayesian inference, using a prior that is the product of a conventional uninformative distribution and an indicator function representing the inequality constraints.
Abstract: Inference in the inequality constrained normal linear regression model is approached as a problem in Bayesian inference, using a prior that is the product of a conventional uninformative distribution and an indicator function representing the inequality constraints. The posterior distribution is calculated using Monte Carlo numerical integration, which leads directly to the evaluation of expected values of functions of interest. This approach is compared with others that have been proposed. Three empirical examples illustrate the utility of the proposed methods using an inexpensive 32-bit microcomputer.
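A minimal sketch of this Monte Carlo scheme under assumptions of my own (hypothetical data, positivity constraints, and a normal approximation to the unconstrained posterior): draw from the unconstrained posterior and retain only draws satisfying the indicator of the constraints.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: y = X @ beta + noise, with the constraint beta >= 0.
n, k = 100, 3
X = rng.normal(size=(n, k))
y = X @ np.array([0.5, 0.2, 0.1]) + rng.normal(size=n)

# Unconstrained posterior under a flat prior, centered at least squares.
bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ bhat
s2 = resid @ resid / (n - k)
V = s2 * np.linalg.inv(X.T @ X)

# Monte Carlo integration: sample the unconstrained posterior and weight
# by the indicator function representing the inequality constraints.
draws = rng.multivariate_normal(bhat, V, size=20_000)  # normal approx.
inside = (draws >= 0.0).all(axis=1)
post_mean = draws[inside].mean(axis=0)  # E[beta | data, beta >= 0]
print(inside.mean(), post_mean)
```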

262 citations


Journal ArticleDOI
TL;DR: This paper argues that recent research on normal belief formation is relevant to understanding the establishment and maintenance of delusions, and considers the extent to which the distortions of cognitive processes associated with delusions are content-specific or mood-specific.
Abstract: This paper argues that recent research on normal-belief formation is relevant to our understanding of the establishment and maintenance of delusions. Bayesian theory provides a normative model of the way in which evidence relevant to normal beliefs may be evaluated: this makes it possible to classify delusional beliefs in terms of deviations from optimal Bayesian inference. Some hypothetical forms of deviation appear to correspond closely to cognitive processes observed in some groups of deluded patients. Theories of the precise nature of the abnormal judgemental processes also have implications for psychological approaches to treatment of deluded patients. The role of hallucinations in the formation and/or maintenance of delusions and the extent to which the distortions of cognitive processes associated with delusions are content-specific or mood-specific are also considered.
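As a sketch of the normative model invoked here (a toy example of my own, not from the paper): sequential Bayesian updating in a two-urn task, with an "overweight" parameter standing in for one hypothetical deviation from optimal inference.

```python
# Two urns: A is 85% red, B is 15% red. An observer sees draws and updates
# P(urn = A). A normative updater applies Bayes' rule per draw; a deviant
# updater might treat each likelihood ratio as stronger than it is.
def posterior_A(draws, p_red_A=0.85, p_red_B=0.15, prior_A=0.5, overweight=1.0):
    odds = prior_A / (1 - prior_A)
    for d in draws:  # d is 'r' (red) or 'w' (white)
        lr = (p_red_A / p_red_B) if d == 'r' else ((1 - p_red_A) / (1 - p_red_B))
        odds *= lr ** overweight  # overweight > 1 exaggerates each datum
    return odds / (1 + odds)

print(posterior_A("rrw"))                 # optimal Bayesian posterior
print(posterior_A("rrw", overweight=2))   # hypothetical over-adjusting updater
```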

187 citations


Book ChapterDOI
TL;DR: In this paper, four results about the principle of maximizing entropy (MAXENT) and its connections with Bayesian theory are discussed: a restricted equivalence between the two, and three results reporting an inability to extend that equivalence beyond specialized constraints.
Abstract: This essay is, primarily, a discussion of four results about the principle of maximizing entropy (MAXENT) and its connections with Bayesian theory. Result 1 provides a restricted equivalence between the two where the Bayesian model for MAXENT inference uses an a priori probability that is uniform, and where all MAXENT constraints are limited to 0–1 expectations for simple indicator-variables. The other three results report on an inability to extend the equivalence beyond these specialized constraints. Result 2 establishes a sensitivity of MAXENT inference to the choice of the algebra of possibilities even though all empirical constraints imposed on the MAXENT solution are satisfied in each measure space considered. The resulting MAXENT distribution is not invariant over the choice of measure space. Thus, old and familiar problems with the Laplacean principle of Insufficient Reason also plague MAXENT theory. Result 3 builds upon the findings of Friedman and Shimony (1971, 1973) and demonstrates the absence of an exchangeable, Bayesian model for predictive MAXENT distributions when the MAXENT constraints are interpreted according to Jaynes' (1978) prescription for his (1963) Brandeis Dice problem. Last, Result 4 generalizes the Friedman and Shimony objection to cross-entropy (Kullback-information) shifts subject to a constraint of a new odds-ratio for two disjoint events.
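For concreteness, the Brandeis dice problem mentioned above can be solved numerically. This sketch assumes the standard statement of the problem (a mean constraint of 4.5 on a six-sided die) and solves for the exponential-form MAXENT solution.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_mean(target_mean):
    """MAXENT distribution on {1..6} with E[X] = target_mean: p_i ∝ exp(-b*i)."""
    def mean_gap(b):
        p = np.exp(-b * faces)
        p /= p.sum()
        return p @ faces - target_mean
    b = brentq(mean_gap, -10, 10)  # solve for the Lagrange multiplier
    p = np.exp(-b * faces)
    return p / p.sum()

print(maxent_mean(4.5))  # Jaynes' Brandeis dice: skewed toward high faces
print(maxent_mean(3.5))  # uniform distribution recovered when the constraint is vacuous
```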

112 citations


Journal ArticleDOI
Mike West
TL;DR: In this article, a Bayesian approach based on comparing predictions from the standard model with those of a simple alternative model is presented.
Abstract: A Bayesian approach is presented, based on comparing predictions from the standard model with those of a simple alternative model.

89 citations


Journal ArticleDOI
TL;DR: In this article, a unified view of previous work involving univariate and bivariate models with some new results pertaining to mixtures, form-invariance and Bayesian inference is presented.
Abstract: This paper provides a brief structural perspective of discrete weighted distributions in theory and practice. It develops a unified view of previous work involving univariate and bivariate models, with some new results pertaining to mixtures, form-invariance, and Bayesian inference.
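For readers new to the construction, here is the basic weighted-distribution identity, with the size-biased Poisson as a standard example of form-invariance (a textbook fact, not a result of this paper):

```latex
% Weighted distribution with weight function w(x):
p_w(x) = \frac{w(x)\, p(x)}{\mathbb{E}[w(X)]}
% Size biasing (w(x) = x) applied to Poisson(\lambda), where E[X] = \lambda:
p_w(x) = \frac{x \, e^{-\lambda}\lambda^{x}/x!}{\lambda}
       = \frac{e^{-\lambda}\lambda^{x-1}}{(x-1)!}, \qquad x = 1, 2, \dots
% i.e. the size-biased variable is 1 + Poisson(\lambda): the family is
% form-invariant under size biasing, up to a unit shift.
```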

59 citations


Journal ArticleDOI
TL;DR: It is shown how one may carry out 'case-control studies' without necessarily having a control group, and the methods of Bayesian inference are outlined, with the data that first showed the relationship between in utero exposure to diethylstilbestrol and cancer of the vagina in young girls.
Abstract: We outline the methods of Bayesian inference for applications to case-control studies. These methods appear as the natural way of making inferences, since much of the controversy that surrounds a specific case-control study is subjective. We derive conjugate prior distributions of exposure, posterior distributions of the ratio of the odds of being incident with a disease both with and without exposure to a potential causal agent, and convenient approximations. In particular, we show how one may carry out ‘case-control studies’ without necessarily having a control group. We illustrate these ideas with the data that first showed the relationship between in utero exposure to diethylstilbestrol and cancer of the vagina in young girls.
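A minimal sketch of the conjugate-Beta machinery for a case-control odds ratio; the counts are illustrative (loosely patterned on the diethylstilbestrol study), and this is not the authors' exact derivation or their approximations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative exposure counts: (exposed, total) among cases and controls.
cases_exposed, n_cases = 7, 8
ctrls_exposed, n_ctrls = 0, 32

# Conjugate Beta(1, 1) priors on exposure probabilities p1 (cases), p0 (controls).
p1 = rng.beta(1 + cases_exposed, 1 + n_cases - cases_exposed, size=100_000)
p0 = rng.beta(1 + ctrls_exposed, 1 + n_ctrls - ctrls_exposed, size=100_000)

# Posterior of the odds ratio psi = [p1/(1-p1)] / [p0/(1-p0)] by Monte Carlo.
psi = (p1 / (1 - p1)) / (p0 / (1 - p0))
print(np.median(psi), np.percentile(psi, [2.5, 97.5]))
```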

54 citations


Journal ArticleDOI
TL;DR: The best known theory for tests of significance, due largely to Fisher, is very well received; but as this paper argues, Fisher's logic is not consistent with Bayes' theorem.
Abstract: The best (most widely) received theory for tests of significance is that due largely to Fisher. Embellished with Neyman's mathematics, Fisher's theory is very well received. But Fisher's logic is not consistent with Bayes' theorem. And Bayes' theorem is beyond reproach. Thus, Fisher's logic is deficient. However, in practice, there is often some redress. Indeed, sometimes Fisher's level of significance P coincides mathematically with the posterior probability of the null hypothesis, i.e. P = p(H0 | E), where E is the sample event (evidence). More generally, a good Fisherian tends intuitively (although certainly not inevitably) toward the inference he would make if he employed Bayes' theorem with explicit subjective priors. In effect, he is almost Bayesian.
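One textbook case in which the coincidence P = p(H0 | E) holds exactly (my example, not necessarily the paper's): a one-sided test of a normal mean under a flat prior.

```latex
% Observe X \sim N(\theta, \sigma^2) with a flat prior on \theta; test H_0: \theta \le 0.
% One-sided significance level at the observed x:
P = \Pr(X \ge x \mid \theta = 0) = 1 - \Phi(x/\sigma).
% The posterior for \theta given x is N(x, \sigma^2), so
p(H_0 \mid E) = \Pr(\theta \le 0 \mid x) = \Phi(-x/\sigma) = 1 - \Phi(x/\sigma) = P.
```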

53 citations


Book
01 Feb 1986
TL;DR: Introduction; Overview of Classical Methods; Probability and Its Interpretation; Bayes' Theorem; Principles of Bayesian Inference; Normal Means and Variances; Inference in Linear Regression; Inference for Binomial Proportions; Decision Analysis; Summary and Perspective.
Abstract: Introduction; Overview of Classical Methods; Probability and Its Interpretation; Bayes' Theorem; Principles of Bayesian Inference; Normal Means and Variances; Inference in Linear Regression; Inference for Binomial Proportions; Decision Analysis; Summary and Perspective.

Proceedings Article
11 Aug 1986
TL;DR: Bayes' formula can revise set estimates, often at little computational cost beyond that needed for point priors, and support rigorous inference on such everyday assertions as "one event is more likely than another" or that an event "usually" occurs.
Abstract: It is conventional to apply Bayes' formula only to point estimates of the prior probabilities. This convention is unnecessarily restrictive. The analyst may prefer to estimate that the priors belong to some set of probability vectors. Set estimates allow the non-paradoxical expression of ignorance and support rigorous inference on such everyday assertions as "one event is more likely than another" or that an event "usually" occurs. Bayes' formula can revise set estimates, often at little computational cost beyond that needed for point priors. Set estimates can also inform statistical decisions, although disagreement exists about what decision methods are best.
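A minimal sketch of set-valued priors under Bayes' formula, with a hypothetical three-hypothesis example: revise each extreme point of the prior set and read off posterior bounds.

```python
import numpy as np

# Three hypotheses; the analyst asserts only that H1 is at least as likely
# as H2. The extreme points of that set of prior vectors are:
prior_extremes = np.array([
    [1.0, 0.0, 0.0],
    [0.5, 0.5, 0.0],
    [0.0, 0.0, 1.0],
])
likelihood = np.array([0.2, 0.6, 0.4])  # P(evidence | H_i), hypothetical

# Bayes' formula applied vertex-by-vertex; the posterior set is spanned
# by the revised extreme points.
posts = prior_extremes * likelihood
posts /= posts.sum(axis=1, keepdims=True)
print(posts.min(axis=0), posts.max(axis=0))  # bounds on each posterior probability
```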

Journal ArticleDOI
TL;DR: An approach to causality assessment that is based on the logic of uncertainty, Bayesian probability theory is outlined, taken to be the calculation of the posterior odds in favor of drug causation, given all available background and case information.
Abstract: This paper outlines an approach to causality assessment that is based on the logic of uncertainty, Bayesian probability theory. The goal of causality assessment is taken to be the calculation of the posterior odds in favor of drug causation, given all available background and case information. There are two stages to the Bayesian approach: collecting the facts and evaluating the evidence. The evaluation proceeds by a series of probability assessments that decompose the overall causality assessment into a series of component evaluations, each of which focuses on one factor or source of information. The solutions to these component problems are then combined according to the rules of probability theory to give a solution to the overall causality assessment.
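A minimal sketch of the decomposition described above: posterior odds as prior odds times a product of component likelihood ratios. All numbers are hypothetical assessments, not values from the paper.

```python
# Posterior odds of drug causation = prior odds * product of likelihood
# ratios, one per component evaluation.
prior_odds = 0.25   # from background information, hypothetical
likelihood_ratios = {
    "onset timing": 3.0,   # P(timing | drug caused) / P(timing | other cause)
    "dechallenge":  2.0,
    "rechallenge":  4.0,
}

posterior_odds = prior_odds
for factor, lr in likelihood_ratios.items():
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, posterior_prob)  # odds of 6.0 -> probability ~0.857
```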

Book ChapterDOI
08 Aug 1986
TL;DR: A knowledge based system for ship classification that was originally developed using the PROSPECTOR updating method has been reimplemented to use the inference procedure developed by Pearl and Kim, and the comparative performance of the two versions of the system is discussed.
Abstract: One of the most important aspects of current expert systems technology is the ability to make causal inferences about the impact of new evidence. When the domain knowledge and problem knowledge are uncertain and incomplete, Bayesian reasoning has proven to be an effective way of forming such inferences [3,4,8]. While several reasoning schemes based on Bayes' Rule have been developed, there has been very little work examining the comparative effectiveness of these schemes in a real application. This paper describes a knowledge based system for ship classification [1], originally developed using the PROSPECTOR updating method [2], that has been reimplemented to use the inference procedure developed by Pearl and Kim [4,5]. We discuss our reasons for making this change, the implementation of the new inference engine, and the comparative performance of the two versions of the system.

Journal ArticleDOI
TL;DR: In this article, methods applicable to univariate unimodal probability density functions are described.
Abstract: Methods applicable to univariate unimodal probability density functions are described.

Book ChapterDOI
08 Aug 1986
TL;DR: The development of a causal Bayesian model for the diagnosis of appendicitis is described, along with why it is superior to alternative approaches to reasoning about uncertainty popular in the AI community.
Abstract: The causal Bayesian approach is based on the assumption that effects (e.g., symptoms) that are not conditionally independent with respect to some causal agent (e.g., a disease) are conditionally independent with respect to some intermediate state caused by the agent, (e.g., a pathological condition). This paper describes the development of a causal Bayesian model for the diagnosis of appendicitis. The paper begins with a description of the standard Bayesian approach to reasoning about uncertainty and the major critiques it faces. The paper then lays the theoretical groundwork for the causal extension of the Bayesian approach, and details specific improvements we have developed. The paper then goes on to describe our knowledge engineering and implementation and the results of a test of the system. The paper concludes with a discussion of how the causal Bayesian approach deals with the criticisms of the standard Bayesian model and why it is superior to alternative approaches to reasoning about uncertainty popular in the AI community.
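A minimal numerical sketch of the causal assumption described above, with hypothetical probabilities: two symptoms that are dependent given the disease become conditionally independent given an intermediate state, and the diagnosis sums that state out.

```python
import itertools

# Disease D -> intermediate state S (e.g., a pathological condition) -> symptoms.
p_D = 0.1
p_S_given_D = {True: 0.8, False: 0.1}                  # P(S | D)
p_sym_given_S = {True: (0.9, 0.7), False: (0.2, 0.1)}  # P(sym1|S), P(sym2|S)

def joint(d, s, s1, s2):
    pd = p_D if d else 1 - p_D
    ps = p_S_given_D[d] if s else 1 - p_S_given_D[d]
    q1, q2 = p_sym_given_S[s]
    return pd * ps * (q1 if s1 else 1 - q1) * (q2 if s2 else 1 - q2)

# P(D | sym1, sym2): symptoms are independent only given S, so S is summed out.
num = sum(joint(True, s, True, True) for s in (True, False))
den = sum(joint(d, s, True, True) for d, s in itertools.product((True, False), repeat=2))
print(num / den)
```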

Journal ArticleDOI
TL;DR: Learn-merge invariance is a property of prior distributions (related to postulates introduced by the philosophers W. E. Johnson and R. Carnap) which is defined and discussed within the Bayesian learning model.

Book ChapterDOI
08 Aug 1986
TL;DR: In this paper, a general problem of symbolic hierarchical Bayesian inference for military force inference is discussed, and a method for approximate hierarchical accrual that can be used to selectively avoid unnecessary conflict resolution is presented.
Abstract: This chapter discusses Bayesian inference for radar imagery based surveillance. Inference is performed over a space of hierarchically linked hypotheses. The hypotheses represent statements of the form: there is a military force of type F in deployment D at world location L at time T. The hierarchy in the hypothesis space corresponds to the hierarchy inherent in military doctrine of force structuring. Thus, array-level hypotheses of military units, such as companies, artillery batteries, and missile sites, are linked to their component unit hypotheses of vehicles, batteries, and missile launchers. Similarly, companies are grouped to form battalion hypotheses and battalions to form regiments. The structure of inference follows a pattern based on the models that are matched to generate hypotheses of the presence of military forces on the battlefield. The chapter discusses a general problem of symbolic hierarchical Bayesian inference for military force inference. It presents a method for approximate hierarchical accrual that can be used to selectively avoid unnecessary conflict resolution depending on the system's focus of attention in processing tasks.

Journal ArticleDOI
TL;DR: It appears from this Solution, that where the Number of Trials is very great, the Deviation must be inconsiderable.
Abstract: (1986). Is the Reference in Hartley (1749) to Bayesian Inference? The American Statistician: Vol. 40, No. 2, pp. 109-110.

Journal ArticleDOI
TL;DR: In this article, the problem of statistical inference on the parameters of the three parameter power function distribution based on a full unordered sample of observations or a type II censored ordered sample is considered.
Abstract: We consider the problem of statistical inference on the parameters of the three parameter power function distribution based on a full unordered sample of observations or a type II censored ordered sample of observations. The inference philosophy used is the theory of structural inference. We state inference procedures which yield inferential statements about the three unknown parameters. A numerical example is given to illustrate these procedures. It is seen that within the context of this example the inference procedures of this paper do not encounter certain difficulties associated with classical maximum likelihood based procedures. Indeed it has been our numerical experience that this behavior is typical within the context of that subclass of the three parameter power function distribution to which this example belongs.

Posted Content
TL;DR: In this article, a general equilibrium asset price model is proposed to describe an economy with actual output generated by a Markovian latent process of technological shocks, where agents make use of the entire observed history to make inference about the latent technological shocks.
Abstract: The paper has two major parts. The first part focuses on the theoretical properties of a general equilibrium asset price model describing an economy with actual output stochastically generated by a Markovian latent process of technological shocks. With a concealed state space economy, agents make use of the entire observed history to make inference about the latent technological shock. Instead of focussing on the entire history of output, past events are summarized by a conditional probability distribution defined on the space of all possible states of technology. Bayesian updating reestablishes Markovian recursive dynamics, and allows one to exploit the analytical tools introduced by Lucas (1978) in solving for equilibrium asset prices. The second part of the paper deals with the econometric implications of the model. The consumption and portfolio decisions can be expressed as time invariant functions defined on the transformed state space, i.e. the space of conditional probability distributions on the state of nature at any point in time. This does not necessarily imply that the co-movements of consumption, portfolio decisions, output and asset prices are stationary. We formulate a Gaussian model, very similar to Hansen and Singleton (1983), and estimate it via a state space representation which incorporates the rational expectations equilibrium cross-equation restriction.
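The Bayesian updating step described here is the standard discrete-state filter recursion; a sketch with hypothetical transition and output parameters, in which the conditional distribution over states is itself the Markovian state variable.

```python
import numpy as np

# Two latent technology states with Markov transitions; output is noisy in each.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])              # transition matrix, hypothetical
means, sd = np.array([1.0, 3.0]), 1.0   # state-conditional output distribution

def normal_pdf(y, mu, sd):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# pi_t summarizes the entire observed history in one probability vector.
pi = np.array([0.5, 0.5])
for y in [1.2, 2.9, 3.3, 0.8]:          # hypothetical observed outputs
    pi = P.T @ pi                        # predict the next state
    pi = pi * normal_pdf(y, means, sd)   # weight by the output likelihood
    pi /= pi.sum()                       # condition on y_t
    print(pi)
```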

Journal ArticleDOI
TL;DR: In this article, using a Dirichlet process prior for F and a squared error loss function, the Bayes and empirical Bayes estimators of λ(F), the probability that Z > X + Y, are derived, and the limiting Bayes estimator of λ(F) is shown to be asymptotically normal under some conditions on the parameter of the process.
Abstract: Let X, Y and Z be independent random variables with common unknown distribution F. Using the Dirichlet process prior for F and a squared error loss function, the Bayes and empirical Bayes estimators of the parameter λ(F), the probability that Z > X + Y, are derived. The limiting Bayes estimator of λ(F) under some conditions on the parameter of the process is shown to be asymptotically normal. The asymptotic optimality of the empirical Bayes estimator of λ(F) is established. When X, Y and Z have support on the positive real line, these results are derived for randomly right censored data. This problem relates to testing whether new is better than used, as discussed by Hollander and Proschan (1972) and Chen, Hollander and Langberg (1983).


Journal ArticleDOI
TL;DR: In this article, the problem of minimizing, subject to uniform risk domination, the Bayes risk (or more generally the posterior expected loss) against certain conjugate priors or mixtures of conjugate priors is considered.
Abstract: Simultaneous estimation of p gamma scale-parameters is considered under squared-error loss. The problem of minimizing, subject to uniform risk domination, the Bayes risk (or more generally the posterior expected loss) against certain conjugate priors or mixtures of conjugate priors is considered. Rather surprisingly, it is shown that the minimization can be done conditionally, thus avoiding variational arguments. Relative savings loss (and a posterior version thereof) is found, and it is shown that in the most favorable situations, Bayesian robustness can be achieved without sacrificing substantial subjective Bayesian gains.

Journal ArticleDOI
TL;DR: In this article, the sensitivity of a Bayesian inference to prior assumptions is examined by Monte Carlo simulation for the beta-binomial conjugate family of distributions, and the Bayesian probability interval of the binomial parameter is found to be quite sensitive to misspecification of the prior distribution.
Abstract: The sensitivity of a Bayesian inference to prior assumptions is examined by Monte Carlo simulation for the beta-binomial conjugate family of distributions. Results for the effect on a Bayesian probability interval of the binomial parameter indicate that the Bayesian inference is for the most part quite sensitive to misspecification of the prior distribution. The magnitude of the sensitivity depends primarily on the difference of assigned means and variances from the respective means and variances of the actually-sampled prior distributions. The effect of a disparity in form between the assigned prior and actually-sampled distributions was less important for the cases tested.
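A minimal sketch of this simulation design with hypothetical settings: draw the binomial parameter from the actually-sampled prior, form the interval under a misspecified assigned prior, and record coverage.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(4)
n, reps = 20, 5000
actual = (2.0, 8.0)     # prior actually generating p
assigned = (8.0, 2.0)   # misspecified prior used in the analysis

hits = 0
for _ in range(reps):
    p = rng.beta(*actual)
    x = rng.binomial(n, p)
    # Nominal 95% Bayesian interval under the assigned Beta prior.
    lo, hi = beta.ppf([0.025, 0.975], assigned[0] + x, assigned[1] + n - x)
    hits += lo <= p <= hi
print(hits / reps)  # realized coverage of the nominal 95% interval
```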

ReportDOI
01 Aug 1986
TL;DR: A computationally feasible scheme is given to compute the probability of an edge at a point in an image that has been convolved with a linear blurring function and corrupted by uncorrelated additive Gaussian noise.
Abstract: A technique is presented for determining the probability of an edge at a point in an image that has been convolved with a linear blurring function and corrupted by uncorrelated additive Gaussian noise. The ideal image is modeled by a set of templates for local neighborhoods. Every neighborhood in the ideal image is assumed to fit one of the templates with high probability. A computationally feasible scheme to compute the probability of edges is given. The output of several of the likelihood generators based on this model can be combined to form a more robust likelihood generator. Keywords: Edge detection; Template; Likelihood; Bayesian reasoning.
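A minimal 1-D sketch of the template model, with hypothetical templates, blur, and noise level: the posterior probability of an edge template given a blurred, noisy neighborhood.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D neighborhood templates for the ideal image: step edge vs. flat patches.
templates = {
    "edge":  np.array([0, 0, 0, 1, 1, 1], float),
    "flat0": np.zeros(6),
    "flat1": np.ones(6),
}
blur = np.array([0.25, 0.5, 0.25])  # linear blurring function, hypothetical
sigma = 0.2                          # std of uncorrelated additive Gaussian noise

def log_likelihood(obs, template):
    pred = np.convolve(template, blur, mode="same")
    return -0.5 * np.sum((obs - pred) ** 2) / sigma**2

# Simulated observed neighborhood: a blurred edge plus Gaussian noise.
obs = np.convolve(templates["edge"], blur, mode="same") + rng.normal(0, sigma, 6)

# Posterior over templates under equal priors; P(edge | neighborhood) by Bayes.
logs = {k: log_likelihood(obs, t) for k, t in templates.items()}
m = max(logs.values())
w = {k: np.exp(v - m) for k, v in logs.items()}
z = sum(w.values())
print({k: v / z for k, v in w.items()})
```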



Book ChapterDOI
01 Jan 1986
TL;DR: An efficient algorithm for time series decomposition is presented following Akaike (1980), who uses a linear Bayesian model, making it possible to take advantage of the model structure to reduce the computational effort.
Abstract: The aim of this work is to present an efficient algorithm for time series decomposition. Following Akaike (1980), who uses a linear Bayesian model, it is possible to take advantage of the model structure to reduce the computational effort.
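A minimal sketch of decomposition via a linear Bayesian model in the spirit of Akaike (1980), with hypothetical hyperparameters: smoothness and seasonal-sum priors become quadratic penalties in one least-squares problem (ignoring the structure-exploiting speedups that are the point of the paper).

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical monthly series: trend + seasonal + noise.
n, s = 120, 12
t = np.arange(n)
y = 0.05 * t + 2 * np.sin(2 * np.pi * t / s) + rng.normal(0, 0.5, n)

# Model: y = trend + seasonal, with priors expressed as penalties on the
# second differences of the trend and on seasonal sums over each period.
I = np.eye(n)
D2 = np.diff(I, 2, axis=0)  # second-difference operator
Ssum = np.array([[1.0 if i <= j < i + s else 0.0 for j in range(n)]
                 for i in range(n - s + 1)])

lam_t, lam_s = 100.0, 10.0  # hyperparameters, hypothetical
A = np.block([[I, I],
              [lam_t * D2, np.zeros((n - 2, n))],
              [np.zeros((n - s + 1, n)), lam_s * Ssum]])
b = np.concatenate([y, np.zeros(n - 2), np.zeros(n - s + 1)])
trend, seasonal = np.split(np.linalg.lstsq(A, b, rcond=None)[0], 2)
print(trend[:3], seasonal[:3])
```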

ReportDOI
01 Sep 1986
TL;DR: This paper describes selection and ranking procedures using Bayesian or empirical Bayes approaches, and sets up a general formulation of the empirical Bayes framework for selection problems.
Abstract: This paper describes selection and ranking procedures using Bayesian or empirical Bayes approaches. Section 2 of this paper deals with the problem of selecting the best population, or selecting a subset containing the best population, through a Bayesian approach. An essentially complete class is obtained for a class of reasonable loss functions. A control condition, called the P*-condition, is used to filter out poor procedures. Section 3 sets up a general formulation of the empirical Bayes framework for selection problems. Several empirical Bayes frameworks are discussed based on the underlying statistical models. Two selection problems dealing with binomial and uniform distributions are discussed in detail.
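A minimal sketch of one Bayesian subset-selection rule honoring a P*-type condition, with hypothetical data and priors (not the paper's exact procedures): include populations, in decreasing order of their posterior probability of being best, until the subset captures probability at least P*.

```python
import numpy as np

rng = np.random.default_rng(7)

# Binomial data from k = 4 populations; common Beta(1,1) prior on each rate.
successes = np.array([18, 22, 25, 15])
trials = np.array([40, 40, 40, 40])
P_star = 0.90

# Posterior draws for each success rate; estimate P(population i is best).
draws = rng.beta(1 + successes, 1 + trials - successes, size=(100_000, 4))
p_best = np.bincount(draws.argmax(axis=1), minlength=4) / draws.shape[0]

# Greedy subset: add populations until P(best is in the subset) >= P*.
order = np.argsort(p_best)[::-1]
subset, prob = [], 0.0
for i in order:
    subset.append(int(i))
    prob += p_best[i]
    if prob >= P_star:
        break
print(subset, prob)
```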