
Showing papers on "Bayesian inference" published in 1974




Book
31 Jul 1974
TL;DR: A systematic treatment of the foundations of statistical inference, ranging from the probability framework, classical statistical theory, and decision theory to interpretations of probability, Bayesian inference, the fiducial argument, and confidence methods.
Abstract: I / The Probability Framework.- II / Classical Statistical Theory.- III / R. A. Fisher: Likelihood and Fiducial Inference.- IV / Decision Theory.- V / Subjective and Logical Approaches.- VI / Comparison of Approaches.- VII / The Language: Syntax.- VIII / Rational Corpora.- IX / Randomness.- X / Probability.- XI / Conditional Probability.- XII / Interpretations of Probability.- XIII / Bayesian Inference.- XIV / The Fiducial Argument.- XV / Confidence Methods.- XVI / Epistemological Considerations.- Appendix / The Mathematical Background.

343 citations


Journal ArticleDOI
TL;DR: Discusses the nature of Bayesian inference and the Bayesian assessment of assumptions, in particular the effect of non-normality on inferences about a population mean, with generalizations.
Abstract: Nature of Bayesian Inference.- Standard Normal Theory Inference Problems.- Bayesian Assessment of Assumptions: Effect of Non-Normality on Inferences About a Population Mean with Generalizations.- Bayesian Assessment of Assumptions: Comparison of Variances.- Random Effect Models.- Analysis of Cross Classification Designs.- Inference About Means with Information from More than One Source: One-Way Classification and Block Designs.- Some Aspects of Multivariate Analysis.- Estimation of Common Regression Coefficients.- Transformation of Data.- Tables.- References.- Indexes.

299 citations


Journal ArticleDOI
TL;DR: The present paper develops a structure in which the expert resolution problem may be logically formulated and conceptually solved, and a framework that enables a decision maker to encode his state of information concerning an expert.
Abstract: This paper is the first in a series of articles introducing a new conceptual and methodological framework for the use of experts in decision situations. Presented is the first theory of expert resolution wholly consistent with the Bayesian or subjectivist view of probability. The approach taken rests philosophically on the foundations of decision analysis. The results form practical tools for solving expert resolution problems. The present paper develops a structure in which the expert resolution problem may be logically formulated and conceptually solved. A framework is developed which enables a decision maker to encode his state of information concerning an expert. Application of the tools of Bayesian inference provides a mechanism by which a decision maker can incorporate an expert's opinion into his own. The more complicated case in which a decision maker is confronted with the diverse judgments of more than one expert is also addressed in detail. Additionally, the problem of determining the economic ...
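
As a rough illustration of the kind of updating the abstract describes (not the paper's actual framework), the sketch below treats an expert's point estimate as a noisy observation of the unknown quantity and folds it into the decision maker's belief by a conjugate normal update; the function name and all numbers are hypothetical.

```python
import math

def update_with_expert(prior_mean, prior_var, expert_estimate, expert_var):
    """Fold one expert's point estimate into the decision maker's belief.

    The decision maker's prior on the unknown quantity is N(prior_mean, prior_var);
    the expert's stated estimate is modelled as the true value plus N(0, expert_var)
    noise, where expert_var encodes the decision maker's view of the expert's
    precision. Conjugacy gives a normal posterior in closed form.
    """
    posterior_precision = 1.0 / prior_var + 1.0 / expert_var
    posterior_var = 1.0 / posterior_precision
    posterior_mean = posterior_var * (prior_mean / prior_var +
                                      expert_estimate / expert_var)
    return posterior_mean, posterior_var

# Decision maker starts vague; the expert says 120 and is judged fairly precise.
mean, var = update_with_expert(prior_mean=100.0, prior_var=400.0,
                               expert_estimate=120.0, expert_var=100.0)
print(mean, math.sqrt(var))   # posterior mean 116.0, sd ~8.9
```

With several experts, the same update could be applied sequentially if their errors are judged conditionally independent; the paper's treatment of dependent experts is more general than this sketch.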

280 citations



Journal ArticleDOI
TL;DR: In this article, the authors develop models describing the process by which rational expectations may be formed within a market, introducing the concept of Bayesian learning in settings where firms cannot immediately move to equilibrium.
Abstract: The approach in this paper is the development of models that describe the process by which rational expectations may be developed within a market. The concept of Bayesian learning is introduced. Consistent and rational expectations are introduced in models where the firms cannot immediately move to equilibrium. Three different models are developed which demonstrate the interaction of Bayesian learning and expectations in achieving a market equilibrium. These models are dynamic and describe the transition process toward equilibrium. Two of the models involve unknown parameters about which the firms learn. The third is a control theory model explicitly involving adjustment costs.
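
A minimal sketch of the Bayesian-learning idea described above, under assumptions not taken from the paper: each period a firm observes a noisy signal of an unknown parameter and updates a normal prior, so its expectation (the posterior mean) drifts toward the true value as data accumulate.

```python
import random

def bayes_learning_path(true_param, obs_var, prior_mean, prior_var, periods, seed=0):
    """Sequence of posterior means as a firm learns an unknown parameter.

    Each period the firm observes the parameter plus N(0, obs_var) noise and
    performs a conjugate normal update; the posterior mean is the firm's
    current expectation of the parameter.
    """
    rng = random.Random(seed)
    mean, var = prior_mean, prior_var
    path = [mean]
    for _ in range(periods):
        y = rng.gauss(true_param, obs_var ** 0.5)
        post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        mean = post_var * (mean / var + y / obs_var)
        var = post_var
        path.append(mean)
    return path

# Expectations start at 30 and move toward the true value 50 as signals arrive.
print(bayes_learning_path(true_param=50.0, obs_var=25.0,
                          prior_mean=30.0, prior_var=100.0, periods=10))
```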

177 citations


Journal ArticleDOI
TL;DR: In this article, the authors trace the history of both likelihood and the method of maximum likelihood, stressing that the distinction between the two must be kept clearly in mind: in Bayesian inference the likelihood is the vehicle which carries the observational or experimental results in Bayes' Theorem, while in Neyman-Pearson methods likelihood was one of the key concepts in the original formulation by Neyman and Pearson (1928).
Abstract: One of R. A. Fisher's most influential papers was "On the mathematical foundations of theoretical statistics" (1922), in which he propounded the method of maximum likelihood as a means of point estimation, and hence established a whole branch of statistical reasoning. Yet this paper does not contain his original statement of the method, which was published in 1912, nor does it contain his original definition of likelihood, which appeared in 1921. The great innovation of the 1922 paper was, rather, the clear specification of one approach to the problem of estimation, and the elucidation of the properties of maximum-likelihood estimators. Methods similar to the method of maximum likelihood have a history prior to the work of Fisher, but the definition of likelihood itself, and the realization that it could be used independently as a measure of relative support for hypotheses, appears to be entirely his own, and the main purpose of the present paper is to investigate the background to his introduction of the new concept. With the decline in the esteem with which repeated-sampling methods are held, in the face of the Bayesian revival, the concept of likelihood has come to the fore as offering an approach to statistical inference that is neither repeated-sampling nor Bayesian (see Edwards, 1972). Although it is too early to predict the extent to which pure likelihood methods can supersede other approaches, the concept of likelihood is, in addition, so fundamental to both Bayesian inference and Neyman-Pearson methods that an account of its origins needs no further justification. In the former, the likelihood is the vehicle which carries the observational or experimental results in Bayes' Theorem, whilst in the latter, likelihood was one of the key concepts in the original formulation by Neyman and Pearson (1928). The present paper traces the history of both likelihood and the method of maximum likelihood; it is essential to keep the distinction between the two clearly in mind.
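
Since the abstract turns on the distinction between the likelihood function and the method of maximum likelihood, a small generic binomial example (not drawn from the paper) may help: the same function measures relative support for hypotheses and, when maximized, yields the point estimate.

```python
from math import comb

def binomial_likelihood(p, successes, trials):
    """Likelihood of parameter p given the observed data."""
    return comb(trials, successes) * p ** successes * (1 - p) ** (trials - successes)

# Likelihood as relative support: compare two hypotheses for 7 successes in 10 trials.
data = (7, 10)
print(binomial_likelihood(0.5, *data), binomial_likelihood(0.7, *data))

# Method of maximum likelihood: pick the p that maximises the same function.
grid = [i / 1000 for i in range(1, 1000)]
mle = max(grid, key=lambda p: binomial_likelihood(p, *data))
print(mle)   # ~0.7, the sample proportion
```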

70 citations


Journal ArticleDOI
TL;DR: The Bayes factor F can be treated as a non-Bayesian criterion and is almost equivalent to the statistic G, and the relationship between F and its own tail area sheds further light on the relationship between Bayesian and "Fisherian" significance.
Abstract: Compromises between Bayesian and non-Bayesian significance testing are exemplified by examining distributions of criteria for multinomial equiprobability. They include Pearson's X2, the likelihood-ratio, the Bayes factor F, and a statistic G that previously arose from a Bayesian model by “Type II Maximum Likelihood.” Its asymptotic distribution, implied by the theory of the “Type II Likelihood Ratio,” is remarkably accurate into the extreme tail. F too can be treated as a non-Bayesian criterion and is almost equivalent to G. The relationship between F and its own tail area sheds further light on the relationship between Bayesian and “Fisherian” significance.
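
For concreteness, the two classical criteria named in the abstract can be computed as follows for a multinomial sample under the equiprobability null; this is a generic sketch and does not reproduce the paper's Bayes factor F or its statistic G.

```python
from math import log

def equiprobability_statistics(counts):
    """Pearson's X^2 and the likelihood-ratio statistic for multinomial equiprobability.

    Under the null every cell has expectation n/k; both statistics are
    asymptotically chi-squared with k - 1 degrees of freedom.
    """
    n, k = sum(counts), len(counts)
    expected = n / k
    x2 = sum((o - expected) ** 2 / expected for o in counts)
    g2 = 2.0 * sum(o * log(o / expected) for o in counts if o > 0)
    return x2, g2

print(equiprobability_statistics([18, 25, 31, 26]))  # X^2 and G^2 for 4 cells
```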

55 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that for quantum mechanical systems associated with finite-dimensional Hilbert spaces, the ad hoc statistical proposition needed to evaluate experimental data is completely determined by the premises of the quantum theory itself.
Abstract: Many physicists take it for granted that their theories can be either refuted or verified by comparison with experimental data. In order to evaluate such data, however, one must employ statistical estimation and inference methods which, unfortunately, always involve an ad hoc proposition. The nature of the latter depends upon the statistical method adopted; in the Bayesian approach, for example, one must use some Lebesgue measure in the “set of all possible distributions.” The ad hoc proposition has usually nothing in common with the physical theory in question, thus subjecting its verification (or refutation) to further doubt. This paper points out one notable exception to this rule. It turns out that in the case of the quantum mechanical systems associated with finite-dimensional Hilbert spaces the proposition is completely determined by the premises of the quantum theory itself.

54 citations






01 Jan 1974
TL;DR: The paper describes the development and evaluation of a class of multistage Bayesian inference models which provide a potentially meaningful and useful framework for the analysis of current modes of intelligence processing.
Abstract: The report discusses inference models which provide a potentially meaningful and useful framework for the analysis of current modes of intelligence processing. The paper describes the development and evaluation of a class of multistage Bayesian inference models.
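
As a hedged sketch of what a multistage (cascaded) inference looks like, assume the analyst sees only a fallible report about a datum rather than the datum itself; the likelihood of each hypothesis is then obtained by summing over the possible data. The probabilities below are invented for illustration and are not taken from the report.

```python
def two_stage_posterior(prior_h, p_data_given_h, p_report_given_data):
    """Posterior P(H | report) for a two-stage (cascaded) inference.

    The analyst never observes the datum D directly, only a report R about it,
    so the likelihood of hypothesis H is P(R | H) = sum_d P(R | d) * P(d | H),
    assuming the report depends on H only through the datum.
    """
    posterior = {}
    for h, prior in prior_h.items():
        like = sum(p_report_given_data[d] * p_data_given_h[h][d]
                   for d in p_report_given_data)
        posterior[h] = prior * like
    z = sum(posterior.values())
    return {h: v / z for h, v in posterior.items()}

prior_h = {"H1": 0.5, "H2": 0.5}
p_data_given_h = {"H1": {"d": 0.8, "not_d": 0.2},
                  "H2": {"d": 0.3, "not_d": 0.7}}
p_report_given_data = {"d": 0.9, "not_d": 0.2}   # the report is fallible
print(two_stage_posterior(prior_h, p_data_given_h, p_report_given_data))
```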


Journal ArticleDOI
TL;DR: In this paper, the authors introduce a model of inference for risk processes based on the Bayesian viewpoint and treat the concepts of exchangeability and partial exchangeability as essential; the model is restricted to the case where the variables of a sequence are conditionally independent given a set (finite or not) of exhaustive and exclusive hypotheses.
Abstract: Our purpose is to introduce some models of inference for risk processes. The Bayesian viewpoint is adopted, and for our treatment the concepts of exchangeability and partial exchangeability (due to B. de Finetti, [6], [7]) are essential. We recall the definitions: the random variables of a sequence (X1, X2, …) are exchangeable if, for every n, the joint distribution of n r.v. of the sequence is always the same, whatever the n r.v. are and however they are permuted. From a structural point of view an exchangeable process X1, X2, … can be regarded as a sequence of identically distributed r.v. among which a “stochastic dependence due to uncertainty” exists. More precisely, the Xi are independent conditionally on any of a given set (finite or not) of exhaustive and exclusive hypotheses. These hypotheses may concern, for instance, the values of a parameter (number or vector) on which the common distribution, of known functional form, of the Xi depends. We shall restrict ourselves to this case. Therefore, we shall assume that, conditionally on each possible value θ of a parameter Θ, the Xi are independent with F(x|θ) as known distribution function. According to the Bayesian approach, a probability distribution on Θ must be assigned.
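
The simplest instance of the setting above, offered here only as an illustration, is an exchangeable 0/1 sequence with a Beta prior on the unknown Bernoulli parameter: conditionally on θ the observations are independent, while unconditionally each observation changes the prediction for the next.

```python
def beta_bernoulli_update(alpha, beta, observations):
    """Posterior Beta(alpha', beta') for an exchangeable 0/1 sequence.

    Conditionally on theta the observations are independent Bernoulli(theta);
    unconditionally they are exchangeable but dependent, since each observation
    shifts the posterior used to predict the next one.
    """
    successes = sum(observations)
    alpha_post = alpha + successes
    beta_post = beta + len(observations) - successes
    return alpha_post, beta_post

a, b = beta_bernoulli_update(1, 1, [1, 0, 1, 1, 0, 1])   # uniform prior on theta
print(a, b, a / (a + b))   # posterior mean = predictive probability of the next 1
```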

01 May 1974
TL;DR: The subjective probability distribution of a random event is revealed by the subject's choice between bets -- a view expressed by F. Ramsey, B. De Finetti, and L.J. Savage, and traceable to E. Borel and, it can be argued, to T. Bayes.
Abstract: By definition, the subjective probability distribution of a random event is revealed by the ('rational') subject's choice between bets -- a view expressed by F. Ramsey, B. De Finetti, L.J. Savage and traceable to E. Borel and, it can be argued, to T. Bayes. Since hypotheses are not observable events, no bet can be made, and paid off, on a hypothesis. The subjective probability distribution of hypotheses (or of a parameter, as in the current 'Bayesian' statistical literature) is therefore a figure of speech, an 'as if,' justifiable in the limit. Given a long sequence of previous observations, the subjective posterior probabilities of events still to be observed are derived by using a mathematical expression that would approximate the subjective probability distribution of hypotheses, if these could be bet on. This position was taken by most, but not all, respondents to a 'Round Robin' initiated by J. Marschak after M.H. DeGroot's talk on Stopping Rules.


Journal ArticleDOI
TL;DR: A comparison with sequential diagnosis shows that the two procedures are different, although they are related, and that the optimality of subsets is sensitive to departures from their composition; an optimal size of optimized subsets of symptoms was observed.
Abstract: Optimization for diagnostic recognition rate was performed for subsets of symptoms of various sizes. The diagnostic problem was the recognition and identification of thyroid diseases. Unbiased evaluation of performance was obtained and the extent of the bias in other evaluation methods was determined. Interdependence of symptoms was shown to be a negligible nuisance in the application of Bayesian inference to the present data. An optimal size of optimized subsets of symptoms was observed. A comparison with sequential diagnosis shows that the two procedures are different, although they are related, and that the optimality of subsets is sensitive to departures from their composition.
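
The usual way to apply Bayesian inference under the independence simplification mentioned above is a naive-Bayes computation over binary symptoms; the sketch below uses invented priors and symptom probabilities, not the study's thyroid data.

```python
def diagnose(priors, likelihoods, symptoms):
    """Posterior over diseases from binary symptoms, assuming conditional independence.

    likelihoods[disease][symptom] is P(symptom present | disease); symptoms maps
    each symptom name to True/False. All probabilities here are illustrative.
    """
    post = {}
    for disease, prior in priors.items():
        p = prior
        for s, present in symptoms.items():
            q = likelihoods[disease][s]
            p *= q if present else (1.0 - q)
        post[disease] = p
    z = sum(post.values())
    return {d: v / z for d, v in post.items()}

priors = {"hyperthyroid": 0.05, "hypothyroid": 0.05, "euthyroid": 0.90}
likelihoods = {
    "hyperthyroid": {"tachycardia": 0.8, "weight_loss": 0.7},
    "hypothyroid":  {"tachycardia": 0.1, "weight_loss": 0.1},
    "euthyroid":    {"tachycardia": 0.2, "weight_loss": 0.2},
}
print(diagnose(priors, likelihoods, {"tachycardia": True, "weight_loss": True}))
```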


Journal ArticleDOI
TL;DR: In this article, a comparison is made of tests of composite statistical hypotheses using the optimal C(α) tests developed in Neyman [4], Buhler and Puri [2] and Bartoo and Puri [7] and the Wald statistic based on maximum likelihood estimators (Wald, [6]).
Abstract: This thesis presents two main bodies of work. First, a comparison is made of tests of composite statistical hypotheses using the optimal C(α) tests developed in Neyman [4], Buhler and Puri [2] and Bartoo and Puri [7] and the Wald statistic based on maximum likelihood estimators (Wald, [6]). These comparisons are carried out in the case where the parameter under test is interior to open sets in parameter space. It is also shown that the two test procedures are asymptotically equivalent in this case. In particular, the problem of constructing tests associated with a mixture of two normal components with one component known is treated in detail. The problem arises out of studies of Down's Syndrome considered in Penrose and Smith [5] and Moran [3].
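
For readers unfamiliar with the second of the two procedures compared above, here is a generic illustration of the Wald statistic based on a maximum likelihood estimator, applied to a single binomial proportion rather than to the mixture setting treated in the thesis.

```python
def wald_statistic(successes, trials, p0):
    """Wald statistic for H0: p = p0, using the MLE and its estimated variance.

    W = (p_hat - p0)^2 / Var_hat(p_hat), asymptotically chi-squared with 1 df
    under the null.
    """
    p_hat = successes / trials
    var_hat = p_hat * (1.0 - p_hat) / trials
    return (p_hat - p0) ** 2 / var_hat

print(wald_statistic(62, 100, 0.5))   # ~6.1, compare with chi-square(1) quantiles
```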

01 Jun 1974
TL;DR: The use of large sample theory in comparing families of test statistics, tracing the use of increasingly powerful mathematics and the establishment of the asymptotic optimality of likelihood ratio tests during the past generation, is discussed in this article.
Abstract: The authors explain the use of large sample theory in comparing families of test statistics, tracing the use of increasingly powerful mathematics and the establishment of the asymptotic optimality of likelihood ratio tests during the past generation. Robbins' empirical Bayes model is then presented and discussed in comparison with structural parameter models as well as standard Bayesian models.
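
Robbins' empirical Bayes idea mentioned above can be illustrated with his classic Poisson example: the posterior mean E[λ | X = x] = (x + 1) f(x + 1) / f(x) is estimated from the observed marginal frequencies, so no prior needs to be specified. The data below are invented.

```python
from collections import Counter

def robbins_estimate(counts, x):
    """Robbins' empirical Bayes estimate of E[lambda | X = x] for Poisson data.

    With X_i | lambda_i ~ Poisson(lambda_i) and the lambda_i drawn from an unknown
    prior, E[lambda | X = x] = (x + 1) * f(x + 1) / f(x); the marginal f is
    estimated by the observed frequencies.
    """
    freq = Counter(counts)
    if freq[x] == 0:
        raise ValueError("no observations equal to x")
    return (x + 1) * freq[x + 1] / freq[x]

data = [0, 0, 1, 0, 2, 1, 0, 3, 1, 0, 2, 0, 1, 0, 0]   # e.g. accident counts
print(robbins_estimate(data, 0))   # estimated mean rate for units that showed 0
```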

Book ChapterDOI
01 Jan 1974
TL;DR: As Fisher clearly saw, testing a scientific hypothesis is another matter altogether: “...such processes have a logical basis very different from those of a scientist engaged in gaining from his observations an improved understanding of reality.”
Abstract: R. A. Fisher, although he too claimed to adopt an empirical conception of probability, saw clearly some of the problems alluded to in the last chapter. According to him, “‘The Theory of Testing Hypotheses’ [the ironical quotes are Fisher’s] was a later attempt, ... to reinterpret [tests of significance] in terms of an imagined process of acceptance sampling....” [1] In designing a test for accepting batches of manufactured items, it is indeed the long run properties of the test that are of interest; if we are willing to reject acceptable samples 5% of the time, it may well be that the only way in which we can achieve just that level of false rejection is to employ a randomized test, and it may make good sense to want to achieve exactly that level of false rejection. It is, after all, the indefinitely long run of repetitions of the test that concerns us. As Fisher clearly saw, testing a scientific hypothesis is another matter altogether: “...such processes have a logical basis very different from those of a scientist engaged in gaining from his observations an improved understanding of reality.” [2]

Journal ArticleDOI
TL;DR: The representativeness heuristic, which leads to the expectation that Ss’ predictions would often run counter to the evidence, was supported.
Abstract: Ss in an inference task were given sequences of data in a symmetric, binary bookbag-and-poker-chip task. They responded not only with subjective probability estimates, but also with which hypothesis they considered favored. Given the same sequences in a prediction task, the same Ss made predictions of the next-to-be-observed datum. For the latter task, differential outcomes are expected under the normative Bayesian model and the representativeness heuristic. The representativeness heuristic, which leads to the expectation that Ss’ predictions would often run counter to the evidence, was supported.
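
The normative Bayesian model for a symmetric, binary bookbag-and-poker-chip task can be sketched as follows; the chip proportion 0.7 is illustrative and not necessarily the one used in the experiment.

```python
def posterior_bag_a(draws, p=0.7, prior_a=0.5):
    """Posterior probability of bag A after a sequence of chip draws.

    Bag A contains a proportion p of red chips, bag B the complementary 1 - p
    (a symmetric binary task). With equal priors, the posterior depends only on
    the difference between the numbers of red and blue chips drawn.
    """
    red = sum(draws)
    blue = len(draws) - red
    like_a = p ** red * (1 - p) ** blue
    like_b = (1 - p) ** red * p ** blue
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

# 1 = red, 0 = blue; after 6 reds and 4 blues the evidence favours bag A (~0.84).
print(posterior_bag_a([1, 1, 0, 1, 1, 0, 1, 0, 1, 0]))
```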

Journal ArticleDOI
TL;DR: The analysis of public policy issues is often a frustrating business as discussed by the authors, as the analyst is apt to find that people in decision-making positions have neither the time nor the training to read it carefully.
Abstract: The analysis of public policy issues is often a frustrating business. After preparing a careful study of the likely effects of a proposed policy, the analyst is apt to find that people in decision-making positions have neither the time nor the training to read it carefully. Decision makers may know who did the study and what the major results were, but they are unlikely to understand how (or how well) it was done. Nor are they likely to believe the results of any single study, no matter how carefully it was done. In one recent case-the Federal Communications Commission's cable television rule making-the decision makers appeared to dismiss in a few sentences all of the analyses they had received: