
Showing papers on "Bayesian inference published in 1980"


Journal ArticleDOI
01 Jul 1980
TL;DR: Predictive checking functions for transformation, serial correlation, and bad values, and their relation with Bayesian options, are considered; robustness is seen from a Bayesian viewpoint and examples are given.
Abstract: Scientific learning is an iterative process employing Criticism and Estimation. Correspondingly, the formulated model factors into two complementary parts: a predictive part allowing model criticism, and a Bayes posterior part allowing estimation. Implications for significance tests, the theory of precise measurement, and ridge estimates are considered. Predictive checking functions for transformation, serial correlation, and bad values, and their relation with Bayesian options, are considered. Robustness is seen from a Bayesian viewpoint and examples are given. For the bad value problem a comparison with M estimators is made.

665 citations


Journal ArticleDOI
TL;DR: In this paper, the basic ideas underlying the construction of a newly introduced Bayesian seasonal adjustment procedure are discussed in detail, with particular emphasis on the use of the concept of the likelihood of a Bayesian model for model selection.
Abstract: The basic ideas underlying the construction of a newly introduced seasonal adjustment procedure based on Bayesian modeling are discussed in detail. Particular emphasis is placed on the use of the concept of the likelihood of a Bayesian model for model selection. The performance of the procedure is illustrated by a numerical example.

149 citations
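The "likelihood of a Bayesian model" referred to above is the marginal density of the data with the parameters integrated out. As a hedged illustration of how such a quantity can drive model selection, the sketch below computes the closed-form log marginal likelihood of a simple normal model under several candidate priors; the data and prior variances are invented for demonstration and are not from the paper.

```python
import math

def log_marginal(y, sigma2, tau2):
    """Closed-form log marginal m(y) for y_i ~ N(theta, sigma2) i.i.d.,
    theta ~ N(0, tau2): y is jointly normal once theta is integrated out."""
    n = len(y)
    s = sum(y)
    ss = sum(v * v for v in y)
    # log|Sigma| for Sigma = sigma2*I + tau2*J (J the all-ones matrix)
    logdet = (n - 1) * math.log(sigma2) + math.log(sigma2 + n * tau2)
    # y' Sigma^{-1} y via the Sherman-Morrison form of Sigma^{-1}
    quad = ss / sigma2 - tau2 * s * s / (sigma2 * (sigma2 + n * tau2))
    return -0.5 * (n * math.log(2 * math.pi) + logdet + quad)

y = [0.8, 1.2, 0.5, 1.9, 1.1]        # toy data (assumed)
for tau2 in (0.1, 1.0, 10.0):        # competing Bayesian models for theta
    print(f"prior variance {tau2:4}: log marginal = {log_marginal(y, 1.0, tau2):.3f}")
```

The candidate with the largest marginal likelihood would be selected, which is the sense in which a Bayesian model itself acquires a likelihood.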



Journal ArticleDOI
TL;DR: In this paper, the procedure of maximizing the missing information is applied to derive reference posterior probabilities for null hypotheses, and the results shed further light on Lindley's paradox and suggest that a Bayesian interpretation of classical hypothesis testing is possible by providing a one-to-one approximate relationship between significance levels and posterior probabilities.
Abstract: The procedure of maximizing the missing information is applied to derive reference posterior probabilities for null hypotheses. The results shed further light on Lindley’s paradox and suggest that a Bayesian interpretation of classical hypothesis testing is possible by providing a one-to-one approximate relationship between significance levels and posterior probabilities.

66 citations
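Lindley's paradox mentioned above can be seen numerically: hold the significance level fixed while the sample size grows, and the posterior probability of a point null rises toward one. A minimal sketch, with the sampling and prior standard deviations assumed for illustration:

```python
# Keep the z-statistic fixed at 1.96 (p ~ 0.05) while n grows, and watch the
# posterior probability of the point null H0: theta = 0 climb toward 1.
import math

sigma = 1.0   # known sampling standard deviation (assumed)
tau = 1.0     # prior standard deviation of theta under H1 (assumed)
z = 1.96      # fixed significance level ~ 0.05

for n in [10, 100, 1_000, 10_000, 100_000]:
    xbar = z * sigma / math.sqrt(n)          # data that just reach p ~ 0.05
    se2 = sigma**2 / n
    # Marginal density of xbar under H0: N(0, sigma^2/n)
    m0 = math.exp(-xbar**2 / (2 * se2)) / math.sqrt(2 * math.pi * se2)
    # Marginal density under H1: N(0, tau^2 + sigma^2/n)
    v1 = tau**2 + se2
    m1 = math.exp(-xbar**2 / (2 * v1)) / math.sqrt(2 * math.pi * v1)
    post_h0 = m0 / (m0 + m1)                 # equal prior odds assumed
    print(f"n={n:>7}  P(H0 | data) = {post_h0:.3f}")
```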


Journal ArticleDOI
TL;DR: The Computer-Assisted Search Planning (CASP) system developed for the United States Coast Guard is based on Monte Carlo simulation to obtain an initial probability distribution for target location and to update this distribution to account for drift due to currents and winds.
Abstract: This paper provides an overview of the Computer-Assisted Search Planning (CASP) system developed for the United States Coast Guard. The CASP information processing methodology is based upon Monte Carlo simulation to obtain an initial probability distribution for target location and to update this distribution to account for drift due to currents and winds. A multiple scenario approach is employed to generate the initial probability distribution. Bayesian updating is used to reflect negative information obtained from unsuccessful search. The principal output of the CASP system is a sequence of probability “maps” which display the current target location probability distributions throughout the time period of interest. CASP also provides guidance for allocating search effort based upon optimal search theory.

59 citations
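A hedged sketch of the negative-information update described above; the grid, drift model, and detection probability are illustrative assumptions, not the actual CASP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Crude Monte Carlo prior: drift many scenario particles, histogram them.
particles = rng.normal(loc=[10.0, 5.0], scale=2.0, size=(10_000, 2))
prior, _, _ = np.histogram2d(particles[:, 0], particles[:, 1],
                             bins=20, range=[[0, 20], [0, 10]])
p = prior / prior.sum()                       # P(target in cell)

# Unsuccessful search: cells covered, with probability of detection pod.
searched = np.zeros_like(p, dtype=bool)
searched[8:12, 8:12] = True
pod = 0.7                                     # assumed sensor detection prob.

# Bayes: P(cell | no detection) is proportional to P(cell) * P(no detection | cell)
p = p * np.where(searched, 1.0 - pod, 1.0)
p /= p.sum()                                  # renormalized probability "map"
```

Repeating the update after each unsuccessful sortie yields the sequence of probability maps the paper describes.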


01 May 1980
TL;DR: Several topics that arise in applying Bayesian ideas to inference problems are discussed: the relationship between the probability specification and real-world experiences is explored, and a suggestion is made that zero probabilities are, in a sense, unreasonable.
Abstract: This paper discusses several topics that arise in applying Bayesian ideas to inference problems. The Bayesian paradigm is first described as an appreciation of the world through probability, probability being expressed in terms of gambles. Various justifications for this view are outlined. The role of models in the specification of probabilities is considered, together with related problems of the size and complexity of the model, robustness, and goodness of fit. Some attempt is made to clarify the concept of conditioning in probability statements. The role of the second argument in a probability function is emphasized again in discussion of the likelihood principle. The relationship between the probability specification and real-world experiences is explored and a suggestion is made that zero probabilities are, in a sense, unreasonable. It is pointed out that it is unrealistic to think of probability as necessarily being defined over a sigma-field. The paper concludes with some remarks on two common objections to the Bayesian view.

37 citations


Journal ArticleDOI
TL;DR: The predictive likelihood is applied to some Gaussian models, and a method of handling improper uniform prior distributions is obtained for the Bayesian modeling of a multi-model situation where the submodels may have different numbers of parameters.
Abstract: The predictive likelihood of a model specified by data is defined when the model satisfies certain conditions. It reduces to the conventional definition when the model is specified independently of the data. The definition is applied to some Gaussian models and a method of handling the improper uniform prior distributions is obtained for the Bayesian modeling of a multi-model situation where the submodels may have different numbers of parameters. The practical utility of the method is checked by a Monte Carlo experiment of some quasi-Bayesian procedures realized by using the predictive likelihoods.

27 citations


Journal ArticleDOI
TL;DR: The sensitivity of Bayesian pattern recognition models to multiplicative deviations in the prior and conditional probabilities is investigated for the two-class case; the results indicate that Bayesian systems based on limited data or subjective probabilities can be expected to achieve a high percentage of correct classification even though the prior and conditional probabilities they use may deviate rather significantly from the true values.
Abstract: The sensitivity of Bayesian pattern recognition models to multiplicative deviations in the prior and conditional probabilities is investigated for the two-class case. Explicit formulas are obtained for the factor K by which the computed posterior probabilities should be divided in order to eliminate the deviation effect. Numerical results for the case of binary features indicate that the Bayesian model tolerates large deviations in the prior and conditional probabilities. In fact, the a priori ratio and the likelihood ratio may deviate within a range of 65-135 percent and still produce posterior probabilities accurate to within ±0.10. The main implication is that Bayesian systems which are based on limited data or subjective probabilities are expected to have a high percentage of correct classification despite the fact that the prior and conditional probabilities they use may deviate rather significantly from the true values.

25 citations
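As a rough illustration of the kind of sensitivity being measured (the numbers are invented; this is not the paper's factor-K derivation), the sketch below perturbs the likelihood ratio multiplicatively across the 65-135 percent band and records the resulting posterior error:

```python
def posterior_class1(prior_odds, likelihood_ratio):
    # P(class 1 | x) in odds form: posterior odds = prior odds * likelihood ratio.
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

true_post = posterior_class1(prior_odds=1.5, likelihood_ratio=2.0)

# Multiplicative deviations spanning the 65-135 percent band reported above,
# applied here to the likelihood ratio alone for simplicity; a fuller check
# would vary the prior ratio as well.
for factor in (0.65, 0.85, 1.0, 1.15, 1.35):
    perturbed = posterior_class1(1.5, 2.0 * factor)
    print(f"factor {factor:.2f}: posterior {perturbed:.3f} "
          f"(error {perturbed - true_post:+.3f})")
```

With these particular values the posterior error stays within ±0.10, consistent with the tolerance reported above.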


Book ChapterDOI
01 Jan 1980
TL;DR: This lecture examines Fisher's critique of Bayesian inference, centering on an acerbic and amusing exchange between Fisher and Jeffreys which took place in the 1930s and revealed which parts of the Bayesian theory Fisher rejected and which parts he simply failed to comprehend.
Abstract: Bayesian inference was Fisher’s intellectual bête noire. His work is filled with sharp attacks on “inverse probability”, and for many years he sought to develop a logic of induction alternative to the Bayesian scheme. In this connection he developed some of his central ideas, including the likelihood-based theory of estimation and the theory of fiducial probability. This lecture is about Fisher’s critique of Bayesian inference. It centers on an acerbic and amusing exchange between Fisher and Jeffreys which took place in the 1930s and revealed which parts of the Bayesian theory Fisher rejected, and which parts he simply failed to comprehend. First, I shall briefly summarize two papers on inverse probability which antedate this exchange, and at the end of the lecture I shall mention a few later changes in point of view which brought Fisher more into line with the ideas argued by Jeffreys in their exchange — but which Fisher had rejected out of hand at the time.

25 citations


Journal ArticleDOI
TL;DR: This paper examines some classical procedures in order to see when they can be given a Bayesian justification.
Abstract: The elimination of nuisance parameters has classically been tackled by various ad hoc devices, and has led to a number of attempts to define partial sufficiency and ancillarity. The Bayesian approach is clearly defined. This paper examines some classical procedures in order to see when they can be given a Bayesian justification.

21 citations


Journal ArticleDOI
TL;DR: It is shown that pivotal inference embraces both Bayesian and frequentist reasoning; the theory of pivotal inference applies when parameters are defined by reference to their effect on observations rather than their effect on distributions.
Abstract: The theory of pivotal inference applies when parameters are defined by reference to their effect on observations rather than their effect on distributions. It is shown that pivotal inference embraces both Bayesian and frequentist reasoning.

Book ChapterDOI
01 Jan 1980
TL;DR: In this paper, the authors discuss the asymptotic theory of Bayes solutions in estimation and testing when the observations are from a discrete parameter stochastic process and present sufficient conditions for the strong consistency of the Bayes estimators for general loss functions and discuss some recent results for loss functions of quadratic nature.
Abstract: This chapter discusses the asymptotic theory of Bayes solutions in estimation and testing when the observations are from a discrete parameter stochastic process. It presents the fundamental theorem in the asymptotic theory of Bayesian inference, namely, the approach of the posterior density to the normal for discrete parameter stochastic processes. The chapter explains the asymptotic behavior of Bayes estimators for discrete-time stochastic processes. It presents sufficient conditions for the strong consistency of Bayes estimators for general loss functions and discusses some recent results for loss functions of quadratic nature for a sequence of statistical problems that have the Bayesian form. The chapter discusses Bayesian testing and a limit theorem for the nth root of posterior risk. It reviews the behavior of the Bayes posterior risk in testing disjoint hypotheses or hypotheses that are separated by an indifference region in which the losses caused by taking the wrong decision are zero.

Journal ArticleDOI
TL;DR: In the context of automatically forecasting the daily electricity demand cycle, simulation experiments indicate significantly improved forecasting performance over a commonly used rescaling approach.

Book ChapterDOI
01 Jan 1980
TL;DR: In this article, the authors focus on the applications of certain inequalities in the areas of estimation, hypothesis testing, simultaneous comparisons, ranking and selection, and reliability theory, and demonstrate the main point that the inequalities, although developed in a theoretical way, are important and powerful tools for solving many real-life problems in statistics.
Abstract: In Chapters 2–7 we studied probability inequalities in multivariate distributions from a theoretical point of view and did not describe their applications. The purpose of this chapter is to concentrate on the applications of certain inequalities in the areas of estimation, hypothesis testing, simultaneous comparisons, ranking and selection, and reliability theory. (There are a number of other areas in statistics (e.g., Bayesian inference and nonparametric inference) in which such inequalities are also useful, but these will not be discussed here.) Most applications described in this chapter have already appeared in the literature, and it is likely that many new applications will be made available in the future through a wider use of the theory. It should be emphasized here that we do not try to include all possible applications in this chapter. Instead we hope to demonstrate the main point that the inequalities, although developed in a theoretical way, are important and powerful tools for solving many real-life problems in statistics.


Journal ArticleDOI
TL;DR: An approximation method for finding the probability density function of system reliability is described to provide an alternative to the sometimes intractable exact method.

Book ChapterDOI
TL;DR: In this article, the authors focus on the various models of the analysis of variance from the Bayesian inference point of view and, in particular, consider the multivariate analysis of variance (MANOVA).
Abstract: This chapter focuses on the various models of the analysis of variance from the Bayesian inference point of view and, in particular, considers the multivariate analysis of variance (MANOVA). The assumptions of the model are discussed in the chapter. To effect inferences about the unknown parameters of the model, it is necessary to make some assumptions. Violations of these assumptions lead to new models and modifications that must be made to the basic model. The unknown parameters in the model are (B, Σ). They are estimated by combining subjective (prior) information with the observed data via Bayes' theorem. The entire MANOVA analysis yields changes in subjective beliefs, predictions, or decisions, all based upon the assessed prior of a given individual. There are two classes of priors: non-informative and informative. The non-informative prior provides a benchmark, or point of departure, for Bayesian inferences. That is, inferences based upon the non-informative prior are those dictated in the absence of prior information, and inferences based upon the informative prior are those obtained when substantive subjective prior beliefs are combined with observation. The non-informative prior distribution can be shown to correspond to minimal information in a Shannon information sense. Informative priors represent the prior distribution of (B, Σ) by a parametric family of distributions whose members are indexed by certain fixed, but as yet undetermined, parameters. The hyperparameters are then assessed for the decision maker on the basis of his or her specific prior information.



01 May 1980
TL;DR: A parallel relaxation algorithm is developed for updating initial heuristic estimates of the likelihood of edges so that continuous boundaries are formed; the final formulation performs effectively on several very complex images.
Abstract: Many image analysis tasks require the construction of a boundary representation as a means of partitioning an image. This paper develops a parallel relaxation algorithm for updating initial heuristic estimates of the likelihood of edges so that continuous boundaries are formed. Bayesian probability theory is used to analyze the probability updating of a single edge based upon the joint probabilities of the edges in its local surrounding context. The relationships between edges, sometimes referred to as compatibility coefficients in relaxation algorithms, are embodied as conditional probabilities between the central edge and the context of edges. The set of conditional probabilities is theoretically derived from a model of desired line drawings that satisfy basic notions of boundary continuity. The local updating function attempts to drive the likelihood of each central edge into consistency with the surrounding context. Experiments involving the iterative parallel application of this non-linear Bayesian updating function to all edge probabilities demonstrate serious problems in the formulation. A variety of heuristic modifications, guided by theoretical considerations, are examined empirically. The final formulation is an algorithm that performs in an effective manner on several very complex images.
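A schematic sketch of the relaxation idea: iteratively pull each edge probability toward consistency with its neighbors. The grid, neighborhood, and update rule below are simplifications assumed for illustration, not the paper's derived conditional probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(0.0, 1.0, size=(32, 32))   # initial heuristic edge likelihoods

for _ in range(25):
    # Context support: mean probability over the 4-neighborhood of each edge.
    context = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4.0
    # Bayes-like update in odds form: reinforce edges whose context supports
    # boundary continuity, suppress isolated responses, map back to [0, 1].
    odds = (p / (1 - p + 1e-9)) * (context / (1 - context + 1e-9))
    p = odds / (1.0 + odds)

edges = p > 0.5                            # thresholded boundary map
```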

Journal ArticleDOI
TL;DR: This article considered the problem of making statistical inferences about group judgements and group decisions using Qualitative Controlled Feedback, from the Bayesian point of view, and developed a model for the responses of the panel at each stage.
Abstract: This paper considers the problem of making statistical inferences about group judgements and group decisions using Qualitative Controlled Feedback, from the Bayesian point of view. The qualitative controlled feedback procedure was first introduced by Press (1978) for a single question of interest. The procedure is first reviewed here, including the extension of the model to the multiple question case. We develop a model for the responses of the panel at each stage. Many questions are treated simultaneously, and an autoregressive model is developed for explaining the responses of the group members as a function of the feedback. The errors are assumed to follow a matrix intraclass covariance structure. Marginal and conditional posterior distributions of the regression coefficient vector are found in both small and large samples. The broadly defined generic family of multidimensional Student-t distributions is found to play a major role in the results.

Journal ArticleDOI
TL;DR: Seasonal analysis of economic time series is used to illustrate ways of approaching difficulties in the task of assessing posterior distributions from noisy empirical data.
Abstract: The task of assessing posterior distributions from noisy empirical data imposes difficult requirements of modelling, computing, and assessing sensitivity to model choice. Seasonal analysis of economic time series is used to illustrate ways of approaching such difficulties.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the use of predictive distributions for model criticism and also the implications for significance tests and the theory of precise measurement in scientific learning, which is seen as an iterative process employing Criticism and Estimation.
Abstract: Scientific learning is seen as an iterative process employing Criticism and Estimation. Sampling theory use of predictive distributions for model criticism is examined, as are the implications for significance tests and the theory of precise measurement. Normal theory examples and ridge estimates are considered. Predictive checking functions for transformation, serial correlation, and bad values are reviewed, as is their relation with Bayesian options. Robustness is seen from a Bayesian viewpoint and examples are given. The bad value problem is also considered and a comparison with M estimators is made.
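A minimal sketch of a predictive check in this spirit: simulate the predictive distribution of a checking function under the entertained model and see where the observed value falls. The i.i.d. normal reference model, the lag-1 serial correlation checking function, and the toy data are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def lag1_corr(x):
    # Lag-1 serial correlation used as the checking function.
    return np.corrcoef(x[:-1], x[1:])[0, 1]

observed = rng.normal(size=50).cumsum() * 0.1 + rng.normal(size=50)  # toy data
obs_stat = lag1_corr(observed)

# Predictive reference distribution of the checking function under an
# i.i.d. normal model for the data.
sims = np.array([lag1_corr(rng.normal(size=50)) for _ in range(5000)])
p_value = np.mean(sims >= obs_stat)        # predictive tail probability
print(f"checking function {obs_stat:.3f}, predictive p = {p_value:.3f}")
```

A small tail probability flags the entertained model for criticism; estimation via the posterior proceeds only for models that survive such checks.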

Journal ArticleDOI
TL;DR: In this article, the inference problem of comparing Poisson parameters of k treatment groups with that of a control group is considered from a Bayesian viewpoint assuming independent conjucate prior distributions on.
Abstract: The inference problem of comparing Poisson parameters of k treatment groups with that of a control. is considered from a Bayesian viewpoint assuming independent conjucate prior distributions on . The joint posterior distribution of , is obtained. For studying a hypothesis concerning , a function of having an asymptotic chi-square distribution is derived. The results are useful in biological applications when a scientist is interested in the comparison of k sets of Poisson counts for the treated groups with those of a control group.
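A hedged sketch of the conjugate setup described above: with independent Gamma priors, the Poisson means of the control and treatment groups have independent Gamma posteriors, and comparisons can be read off by Monte Carlo. The counts and hyperparameters below are invented for illustration, and the Monte Carlo comparison stands in for the paper's chi-square function.

```python
import numpy as np

rng = np.random.default_rng(3)

a0, b0 = 1.0, 1.0                       # Gamma(shape, rate) prior (assumed)
counts = {"control": (40, 10),          # (total events, exposure) per group
          "treat1": (55, 10),
          "treat2": (38, 10)}

# Conjugacy: Gamma(a0, b0) prior + Poisson counts -> Gamma(a0 + y, b0 + t).
posteriors = {g: (a0 + y, b0 + t) for g, (y, t) in counts.items()}

# Posterior probability that each treatment mean exceeds the control mean,
# by simple Monte Carlo on the independent Gamma posteriors.
draws = {g: rng.gamma(a, 1.0 / b, size=20_000)   # numpy takes scale = 1/rate
         for g, (a, b) in posteriors.items()}
for g in ("treat1", "treat2"):
    prob = np.mean(draws[g] > draws["control"])
    print(f"P(lambda_{g} > lambda_control | data) = {prob:.3f}")
```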