
Showing papers on "Bayesian probability published in 1983"


Journal ArticleDOI
TL;DR: The new method proposed here is based on Bayesian statistical inference and has several advantages over earlier approaches: it allows complete flexibility in the degree of belief placed on the prior estimate of the trip matrix and permits different degrees of belief in different parts of the prior estimate.
Abstract: Previous methods for estimating a trip matrix from traffic volume counts have used the principles of maximum entropy and minimum information. These techniques implicitly give as little weight to prior information on the trip matrix as possible. The new method proposed here is based on Bayesian statistical inference and has several advantages over these earlier approaches. It allows complete flexibility in the degree of belief placed on the prior estimate of the trip matrix and also allows for different degrees of belief in different parts of the prior estimate. Furthermore, under certain assumptions the method reduces to a simple updating scheme in which observations on the link flows successively modify the trip matrix. At the end of the scheme confidence intervals are available for the estimates of the trip matrix elements.

381 citations
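To make the sequential updating scheme concrete, here is a minimal sketch in Python under a normal-linear approximation: a prior trip matrix with cell-by-cell degrees of belief is revised by each observed link count. The numbers, the prior variances, and the helper name update_with_link_count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-OD-pair example: prior trip estimates and the degree of belief
# in each cell (larger variance = weaker belief in that part of the prior).
t_prior = np.array([100.0, 50.0, 80.0])      # prior trip matrix (flattened)
P = np.diag([400.0, 2500.0, 900.0])          # prior covariance (per-cell belief)

def update_with_link_count(t, P, a, y, obs_var):
    """One normal-linear Bayes update: link flow y ≈ a·t + noise."""
    S = a @ P @ a + obs_var                  # predictive variance of the count
    K = P @ a / S                            # gain
    t_new = t + K * (y - a @ t)              # posterior mean
    P_new = P - np.outer(K, a @ P)           # posterior covariance
    return t_new, P_new

# A link used by OD pairs 1 and 3; observed count of 210 vehicles.
a = np.array([1.0, 0.0, 1.0])
t_post, P_post = update_with_link_count(t_prior, P, a, y=210.0, obs_var=100.0)
print(t_post)                                # cells revised toward the data
print(np.sqrt(np.diag(P_post)))              # rough interval widths for the cells
```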


Journal ArticleDOI
TL;DR: The present article shows that the previous normative analysis of solutions to problems such as the cab problem was incomplete, and that problems of this type require both a signal detection theory and a judgment theory for their proper Bayesian analysis.
Abstract: Several investigators concluded that humans neglect base rate information when asked to solve Bayesian problems intuitively. This conclusion is based on a comparison between normative (calculated) and subjective (responses by naive judges) solutions to problems such as the cab problem. The present article shows that the previous normative analysis was incomplete. In particular, problems of this type require both a signal detection theory and a judgment theory for their proper Bayesian analysis. In Bayes' theorem, posterior odds equals prior odds times the likelihood ratio. Previous solutions have assumed that the likelihood ratio is independent of the base rate, whereas signal detection theory (backed up by data) implies that this ratio depends on base rate. Before the responses of humans are compared with a normative analysis, it seems desirable to be sure that the normative analysis is accurate.

266 citations
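A small worked computation may help contrast the two analyses the article distinguishes: the usual cab-problem solution with a base-rate-independent likelihood ratio, versus a signal-detection version in which the witness's criterion (and hence the likelihood ratio) shifts with the base rate. The d′ value and the ideal-observer criterion rule below are illustrative assumptions, not the article's fitted model.

```python
from math import erf, log, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

p_blue, p_green = 0.15, 0.85          # cab-problem base rates
d_prime = 1.683                       # sensitivity giving ~80% accuracy when both
                                      # cab colours are equally likely (assumed)

# (a) Textbook analysis: likelihood ratio fixed at 0.80/0.20 regardless of base rate.
post_odds = (p_blue / p_green) * (0.80 / 0.20)
print(post_odds / (1 + post_odds))    # ≈ 0.41

# (b) Signal-detection analysis: the witness's criterion shifts with the base rate,
#     so hit and false-alarm rates -- and hence the likelihood ratio -- change too.
criterion = d_prime / 2 + log(p_green / p_blue) / d_prime   # ideal-observer placement (assumed)
hit = Phi(d_prime - criterion)        # P(says "Blue" | cab is Blue)
false_alarm = Phi(-criterion)         # P(says "Blue" | cab is Green)
post_odds_sdt = (p_blue / p_green) * (hit / false_alarm)
print(post_odds_sdt / (1 + post_odds_sdt))   # ≈ 0.71 under these assumptions
```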


Journal ArticleDOI
TL;DR: A class of Bayesian statistical methods for interspecies extrapolation of dose-response functions, using a system of hierarchical prior distributions similar to that of Lindley and Smith (1972), is proposed for the estimation of human lung cancer risk from various environmental emissions.
Abstract: We propose a class of Bayesian statistical methods for interspecies extrapolation of dose-response functions. The methods distinguish formally between the conventional sampling error within each dose-response experiment and a novel error of uncertain relevance between experiments. Through a system of hierarchical prior distributions similar to that of Lindley and Smith (1972), the dose-response data from many substances and species are used to estimate the interexperimental error. The data, the estimated error of interspecies extrapolation, and prior biological information on the relations between species or between substances each contribute to the posterior densities of human dose-response. We apply our methods to an illustrative problem in the estimation of human lung cancer risk from various environmental emissions.

200 citations
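As a rough illustration of the hierarchical idea (in the spirit of Lindley and Smith, not the paper's actual model), the sketch below combines animal dose-response slope estimates through a two-level normal model with a known interexperimental variance and then forms a predictive distribution for the human slope. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical per-experiment slope estimates (log dose-response potency) for one
# substance in several animal species, with their within-experiment sampling variances.
slope_hat = np.array([1.8, 2.4, 2.1, 2.9])
within_var = np.array([0.10, 0.15, 0.08, 0.20])

# Interexperimental ("relevance") variance between species, assumed known for this
# sketch; in the paper it is itself estimated from many substances via hierarchical priors.
between_var = 0.25

# Posterior for the substance-level mean slope under a flat hyperprior.
w = 1.0 / (within_var + between_var)             # precision weights
mu_post = np.sum(w * slope_hat) / np.sum(w)
var_mu = 1.0 / np.sum(w)

# Predictive distribution for the human slope = substance mean + interspecies error.
human_mean = mu_post
human_var = var_mu + between_var
print(human_mean, np.sqrt(human_var))
```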


Book
10 May 1983
TL;DR: The Enterprise of Knowledge as discussed by the authors is a major conceptual and speculative philosophic investigation of knowledge, belief, and decision, which offers a distinctive approach to the improvement of knowledge where knowledge is construed as a resource for deliberation and inquiry.
Abstract: This book presents a major conceptual and speculative philosophic investigation of knowledge, belief, and decision. It offers a distinctive approach to the improvement of knowledge where knowledge is construed as a resource for deliberation and inquiry. The first three chapters of the book address the question of the revision of knowledge from a highly original point of view, one that takes issue with the fallibilist doctrines of Peirce and Popper, and with the views of Dewey, Quine, and Kuhn as well. The next ten chapters are more technical in nature but require relatively little background in mathematical technique. Among the topics discussed are inductive logic and inductive probability, utility theory, rational decision making, value conflict, chance (statistical probability), direct inference, and inverse inference. Chapters 14-17 review alternative approaches to the topic of inverse statistical inference. Much of the discussion focuses on contrasting Bayesian and anti-Bayesian reactions to R. A. Fisher's fiducial argument. This section of the book concludes with a discussion of the Neyman-Pearson-Wald approach to the foundations of statistical inference. The final chapter returns to the epistemological themes with which the book opened, emphasizing the question of the objectivity of human inquiry. An appendix provides a real-world application of Levi's theories of knowledge and probability, offering a critique of some of the methodological procedures employed in the Rasmussen Report to assess risks of major accidents in nuclear power plants. There are also references and an index. "The Enterprise of Knowledge" will interest professionals and students in epistemology, philosophy of science, decision theory, probability theory, and statistical inference.

188 citations


Journal ArticleDOI
TL;DR: In addition to the standard results, the Bayesian approach gives a different method of determining the order (p, q) of the ARMA model.

139 citations


Journal ArticleDOI
TL;DR: Classic analysis is most misleading when the hypothesis in question is already unlikely to be true, when the baseline event rate is low, or when the observed differences are small.
Abstract: Conventional interpretation of clinical trials relies heavily on the classic p value. The p value, however, represents only a false-positive rate, and does not tell the probability that the investigator's hypothesis is correct, given his observations. This more relevant posterior probability can be quantified by an extension of Bayes' theorem to the analysis of statistical tests, in a manner similar to that already widely used for diagnostic tests. Reanalysis of several published clinical trials according to Bayes' theorem shows several important limitations of classic statistical analysis. Classic analysis is most misleading when the hypothesis in question is already unlikely to be true, when the baseline event rate is low, or when the observed differences are small. In such cases, false-positive and false-negative conclusions occur frequently, even when the study is large, when interpretation is based solely on the p value. These errors can be minimized if revised policies for analysis and reporting of clinical trials are adopted that overcome the known limitations of classic statistical theory with applicable Bayesian conventions.

139 citations
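The article's diagnostic-test analogy can be reduced to a one-line calculation: treat α as the false-positive rate and statistical power as the sensitivity, and ask how probable a real effect is after a "significant" result. The prior and power below are illustrative and are not drawn from any of the reanalyzed trials.

```python
def posterior_prob_true(prior, power, alpha=0.05):
    """P(effect is real | 'significant' result), treating the trial like a diagnostic test."""
    true_pos = prior * power           # real effect, correctly detected
    false_pos = (1.0 - prior) * alpha  # no effect, false alarm
    return true_pos / (true_pos + false_pos)

# An unlikely hypothesis tested in a modestly powered trial: a 'positive' p < 0.05
# still leaves the conclusion close to a coin flip.
print(posterior_prob_true(prior=0.10, power=0.50))   # ≈ 0.53
```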


Posted Content
TL;DR: This paper is an introduction to the analysis of games with incomplete information, using a Bayesian model, and the concept of virtual utility is developed as a tool for characterizing efficient incentive-compatible coordination mechanisms.
Abstract: This paper is an introduction to the analysis of games with incomplete information, using a Bayesian model. The logical foundations of the Bayesian model are discussed. To describe rational behavior of players in a Bayesian game, two basic solution concepts are presented: Bayesian equilibrium, for games in which the players cannot communicate; and Bayesian incentive-compatibility, for games in which the players can communicate. The concept of virtual utility is developed as a tool for characterizing efficient incentive-compatible coordination mechanisms.

137 citations


Posted Content
TL;DR: A forecasting procedure based on a Bayesian method for estimating vector autoregressions is applied to ten macroeconomic variables and is shown to improve out-of-sample forecasts relative to univariate equations.
Abstract: This paper develops a forecasting procedure based on a Bayesian method for estimating vector autoregressions. The procedure is applied to ten macroeconomic variables and is shown to improve out-of-sample forecasts relative to univariate equations. Although cross-variable responses are damped by the prior, considerable interaction among the variables is shown to be captured by the estimates. We provide unconditional forecasts as of 1982:12 and 1983:3. We also describe how a model such as this can be used to make conditional projections and to analyze policy alternatives. As an example, we analyze a Congressional Budget Office forecast made in 1982:12. While no automatic causal interpretations arise from models like ours, they provide a detailed characterization of the dynamic statistical interdependence of a set of economic variables, which may help in evaluating causal hypotheses, without containing any such hypotheses themselves.

113 citations
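A minimal sketch of the prior-shrinkage idea behind a Bayesian vector autoregression, for one equation of a toy two-variable system: coefficients are shrunk toward a random walk in the own variable and toward zero on cross-variable lags, which is what damps the cross-variable responses. This is not the paper's actual prior or data; all settings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two series, one lag. A real application uses many variables and lags;
# this only illustrates the prior-shrinkage mechanics.
T = 120
Y = np.cumsum(rng.normal(size=(T, 2)), axis=0)
X = np.column_stack([np.ones(T - 1), Y[:-1]])      # intercept + one lag of each series
y = Y[1:, 0]                                       # equation for variable 1

# Prior: coefficients centered on a random walk in the "own" variable, shrunk elsewhere.
b0 = np.array([0.0, 1.0, 0.0])                     # intercept, own lag, other lag
V0 = np.diag([100.0, 0.2**2, 0.1**2])              # tight prior on the cross-variable lag
sigma2 = np.var(np.diff(Y[:, 0]))                  # crude residual-variance plug-in

# Conjugate normal posterior mean (ridge-like formula).
V0_inv = np.linalg.inv(V0)
post_prec = X.T @ X / sigma2 + V0_inv
post_mean = np.linalg.solve(post_prec, X.T @ y / sigma2 + V0_inv @ b0)
print(post_mean)                                   # cross-variable coefficient is damped toward 0
```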


Journal ArticleDOI
Robert M. Haralick1
TL;DR: From a Bayesian decision theoretic framework, it is shown that the usual statistical approaches do not take context into account because of the assumptions made on the joint prior probability function and because of the simplistic loss function chosen.
Abstract: From a Bayesian decision theoretic framework, we show that the reason why the usual statistical approaches do not take context into account is because of the assumptions made on the joint prior probability function and because of the simplistic loss function chosen. We illustrate how the constraints sometimes employed by artificial intelligence researchers constitute a different kind of assumption on the joint prior probability function. We discuss a couple of loss functions which do take context into account and when combined with the joint prior probability constraint create a decision problem requiring a combinatorial state space search. We also give a theory for how probabilistic relaxation works from a Bayesian point of view.

100 citations


Proceedings Article
Eugene Charniak1
22 Aug 1983
TL;DR: It is shown that the objections most frequently raised against the use of Bayesian statistics within the AI-in-medicine community do not seem to hold, and it is argued that Bayesian statistics is perfectly compatible with heuristic solutions to the multiple disease problem.
Abstract: In this paper, we show that the objections most frequently raised against the use of Bayesian statistics within the AI-in-medicine community do not seem to hold. In particular, we will show that the independence assumptions required to make Bayesian statistics computationally feasible are not nearly as damaging as has been claimed. We will also argue that Bayesian statistics is perfectly compatible with heuristic solutions to the multiple disease problem.

95 citations
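The independence assumption at issue is the one behind a naive-Bayes diagnostic calculation like the sketch below, where findings are treated as conditionally independent given the disease state. The diseases, findings, and probabilities are entirely hypothetical.

```python
# Hypothetical two-disease, three-finding example. Conditional independence of the
# findings within each state is assumed -- the assumption the paper argues is less
# damaging than commonly claimed.
priors = {"disease_A": 0.02, "disease_B": 0.05, "healthy": 0.93}
p_finding_given_state = {                      # P(finding present | state)
    "disease_A": [0.90, 0.70, 0.10],
    "disease_B": [0.30, 0.80, 0.60],
    "healthy":   [0.05, 0.10, 0.05],
}

def posterior(findings):
    """findings: list of 0/1 indicators for each of the three findings."""
    unnorm = {}
    for state, prior in priors.items():
        like = 1.0
        for present, p in zip(findings, p_finding_given_state[state]):
            like *= p if present else (1.0 - p)
        unnorm[state] = prior * like
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

print(posterior([1, 1, 0]))
```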


Journal ArticleDOI
TL;DR: Bayesian methods have been applied to many problems, such as real estate tax assessment, economic forecasting, and monetary reform as mentioned in this paper, as well as the development of Bayesian computer programs.
Abstract: It is an honour to present this paper at St John's College, Cambridge, Sir Harold Jeffreys' college. As you all probably know, Sir Harold has made outstanding, pioneering contributions to the development of Bayesian statistical methodology and applications of it to many problems. In appreciation of his great work, our NBER-NSF Seminar on Bayesian Inference has recently published a book (Zellner, 1980a) honouring him. Jeffreys (1967) set a fine example for us by emphasizing both theory and applications in his work. It is this theme, the interaction between theory and application in Bayesian econometrics, that I shall emphasize in what follows. The rapid growth of Bayesian econometrics from its practically non-existent state in the early 1960s to the present (Zellner, 1981) has involved work on Bayesian inference and decision techniques, applications of them to econometric problems and development of Bayesian computer programs. Selected applications include Geisel (1970, 1975), who used Bayesian prediction and odds ratios to compare the relative performance of simple Keynesian and Quantity of Money Theory macroeconomic models. Peck (1974) utilized Bayesian estimation techniques in an analysis of investment behaviour of firms in the electric utility industry. Varian (1975) developed and applied Bayesian methods for real estate tax assessment problems. Flood and Garber (1980a, b) applied Bayesian methods in a study of monetary reforms using data from the German and several other hyperinflations. Evans (1978) employed posterior odds ratios in a study to determine which of three alternative models best explains the German hyperinflation data. Cooley and LeRoy (1981), Shiller (1973), Zellner and Geisel (1970), and Zellner and Williams (1973) employed a Bayesian approach in studies of time series models for US money demand, investment and personal consumption data. Production function models have been analysed from the Bayesian point of view by Sankar (1969), Rossi (1980) and Zellner and Richard (1973). Tsurumi (1976) and Tsurumi and Tsurumi (1981) used Bayesian techniques to analyse structural change problems. Reynolds (1980) developed and applied Bayesian estimation and testing procedures in an analysis of survey data relating to health status, income and other variables. Litterman (1980) has formulated a Bayesian vector autoregressive model that he employed (and is employing) to generate forecasts of major US macroeconomic variables that compare very

Journal ArticleDOI
TL;DR: A process is described for analyzing failure data in which Bayes' theorem is used twice, firstly to develop a "prior" or "generic" probability distribution and secondly to specialize this distribution to the specific machine or system in question.
Abstract: A process is described for analyzing failure data in which Bayes' theorem is used twice, firstly to develop a "prior" or "generic" probability distribution and secondly to specialize this distribution to the specific machine or system in question. The process is shown by examples to be workable in practice as well as simple and elegant in concept.
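A hedged sketch of the two-stage idea with a conjugate gamma-Poisson model: fleet data suggest a "generic" prior for the failure rate (here by simple moment matching, standing in for the paper's first application of Bayes' theorem), which is then specialized to one machine's record. All counts and exposure times are invented.

```python
import numpy as np

# Stage 1 -- "generic" prior: failure counts and exposure hours from several similar
# plants are used to pick a gamma prior for the failure rate. Moment matching is used
# here as a stand-in; the paper does this stage with Bayes' theorem itself.
counts = np.array([2, 0, 5, 1, 3])
hours = np.array([1.0e4, 0.8e4, 1.5e4, 1.2e4, 1.0e4])
rates = counts / hours
m, v = rates.mean(), rates.var(ddof=1)
alpha0, beta0 = m**2 / v, m / v                 # Gamma(alpha0, rate=beta0) generic prior

# Stage 2 -- specialize to the specific machine: Poisson likelihood, conjugate update.
x_specific, t_specific = 1, 0.5e4               # 1 failure in 5000 h on this machine
alpha1, beta1 = alpha0 + x_specific, beta0 + t_specific
print(alpha1 / beta1)                           # posterior mean failure rate (per hour)
```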

Journal ArticleDOI
TL;DR: In this paper, specific inference for a particular effect, based only on data relevant to this effect, is valid regardless of the complexity of the design, and a standard Bayesian ANOVA is suggested as a direct extension of the usual F-test ANOVA.
Abstract: Whenever in a complex design inferences on separate effects are sought, the (overall) distributional assumptions of a general model are irrelevant. The specific inference approach is examined as a useful alternative to the conventional general model approach. The specific inference for a particular effect, based only on data relevant to this effect, is valid regardless of the complexity of the design. Specific inference is first discussed in terms of significance testing. It is argued that the usual ANOVA table can be regarded as a system of specific analyses, each one resting on a separate specific model in its own right. Then specific inference is discussed within a Bayesian framework. A ‘standard Bayesian ANOVA’ is suggested as a direct extension of the usual F-test ANOVA. Technical developments and methodological implications are outlined.

Journal ArticleDOI
TL;DR: A commonly used model for describing software failures is presented, and it is pointed out that some of the alternative models can be obtained by assigning specific prior distributions for the parameters of this model.
Abstract: In this paper we present a commonly used model for describing software failures, and point out that some of the alternative models can be obtained by assigning specific prior distributions for the parameters of this model. The likelihood function of an unknown parameter of the model poses some interesting issues and problems, which can be meaningfully addressed by adopting a Bayesian point of view. We present some real life data on software failures to illustrate the usefulness of the approach taken here.
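For concreteness, here is a grid-based posterior for a Jelinski-Moranda-style model, offered only as an example of "a commonly used software failure model" and not necessarily the one the paper analyzes: interfailure times are exponential with rate φ(N − i + 1), and a flat prior over a grid of (N, φ) yields a posterior for the number of initial faults N. The data are invented.

```python
import numpy as np

# Hypothetical interfailure times (hours); reliability appears to grow as faults are fixed.
t = np.array([7., 11., 8., 10., 15., 22., 20., 33.])
n_obs = len(t)

N_grid = np.arange(n_obs, 41)                          # N must be at least n_obs
phi_grid = np.linspace(0.001, 0.05, 200)

log_post = np.full((len(N_grid), len(phi_grid)), -np.inf)
for i, N in enumerate(N_grid):
    remaining = N - np.arange(n_obs)                   # N, N-1, ..., N-n_obs+1 faults left
    for j, phi in enumerate(phi_grid):
        rates = phi * remaining
        log_post[i, j] = np.sum(np.log(rates) - rates * t)   # exponential log-likelihood, flat prior

post = np.exp(log_post - log_post.max())
post /= post.sum()
print((post.sum(axis=1) * N_grid).sum())               # posterior mean of N
```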

Journal ArticleDOI
TL;DR: A large scale comparison of time series forecasting methods including the Bayesian is reported, using a simulation study to examine parameter sensitivity and an empirical study which contrasts Bayesian with other time series methods.
Abstract: ‘Bayesian forecasting’ is a time series method of forecasting which (in the United Kingdom) has become synonymous with the state space formulation of Harrison and Stevens (1976). The approach is distinct from other time series methods in that it envisages changes in model structure. A disjoint class of models is chosen to encompass the changes. Each data point is retrospectively evaluated (using Bayes' theorem) to judge which of the models held. Forecasts are then derived conditional on an assumed model holding true. The final forecasts are weighted sums of these conditional forecasts. Few empirical evaluations have been carried out. This paper reports a large scale comparison of time series forecasting methods including the Bayesian. The approach is twofold: a simulation study to examine parameter sensitivity and an empirical study which contrasts Bayesian with other time series methods.
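The "weighted sums of conditional forecasts" idea can be illustrated with a far simpler model-averaging sketch than the Harrison-Stevens multi-state formulation itself: two candidate forecasting rules have their weights updated by Bayes' theorem from one-step-ahead predictive densities, and the final forecast is the weighted combination. The series and predictive variances are invented.

```python
from math import log, exp, pi

def normal_logpdf(x, mean, var):
    return -0.5 * (log(2 * pi * var) + (x - mean) ** 2 / var)

# Hypothetical series that starts level and then drifts upward.
y = [10.1, 9.8, 10.2, 10.0, 10.4, 10.9, 11.5, 12.2]

# Two candidate models: "no change" (forecast = last value) and "trend"
# (forecast = last value + last difference); predictive variances are assumed.
def f_no_change(h): return y[h - 1]
def f_trend(h):     return y[h - 1] + (y[h - 1] - y[h - 2])

models = {"no_change": (f_no_change, 0.5), "trend": (f_trend, 0.5)}
log_w = {m: log(0.5) for m in models}

for h in range(2, len(y)):
    for m, (f, var) in models.items():                 # Bayes update of model weights from
        log_w[m] += normal_logpdf(y[h], f(h), var)     # one-step-ahead predictive densities

mx = max(log_w.values())
w = {m: exp(v - mx) for m, v in log_w.items()}
z = sum(w.values())
w = {m: w[m] / z for m in w}

h_next = len(y)
final = sum(w[m] * models[m][0](h_next) for m in models)   # weighted sum of conditional forecasts
print(w, final)
```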

Journal ArticleDOI
H. Fluehler1, Andrew P. Grieve1, D. Mandallaz, J. Mau, H.A. Moser1 
TL;DR: The statistical methods required for a Bayesian analysis of bioequivalence are outlined and numerically illustrated, and nomograms helpful for the calculation of these probabilities are provided.

Journal ArticleDOI
TL;DR: Analytical results show that, with typical conjoint data, improvement may be expected over the estimation and prediction results obtained with ordinary least squares (OLS), and the expected improvement in prediction is confirmed by pilot empirical results.
Abstract: The authors propose a simple Bayesian approach which combines self-explicated data with conjoint data for estimating individual-level conjoint models. Analytical results show that, with typical con...

Journal ArticleDOI
TL;DR: It is concluded that for some medical problems Bayesian classification systems may be significantly more transferable to new sites than is generally believed, and the study provides strong support for the utility of databases in building, transferring, and testing Bayesian classification systems in general.
Abstract: This study tested the hypothesis that probabilities derived from a large, geographically distant data base of stroke patients could form the basis of an accurate Bayesian decision support system for locally predicting the etiology of strokes. Performance of this "extrainstitutional" system on 100 cases was assessed retrospectively, both by error rate and using a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by physicians and to Bayesian classification based on "low cost" local and subjective probabilities. We conclude that for some medical problems Bayesian classification systems may be significantly more transferable to new sites than is generally believed. Furthermore, this study provides strong support for the utility of clinical databases in building, transferring, and testing Bayesian classification systems in general. (Med Decis Making 3:501-509, 1983)

Book ChapterDOI
01 Jan 1983
TL;DR: A general, Bayesian approach to robustification via model elaboration, illustrated by considering the elaboration of standard models to incorporate the possibility of non-standard distributional shapes or of individual aberrant observations, is introduced and discussed.
Abstract: A general, Bayesian approach to robustification via model elaboration is introduced and discussed. The approach is illustrated by considering the elaboration of standard models to incorporate the possibility of non-standard distributional shapes or of individual aberrant observations (outliers). Influence functions are then considered from a Bayesian point of view and an approach to robust time series analysis is outlined.
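One standard elaboration of this kind, sketched below under assumed settings, replaces a normal observation model with a two-component mixture in which each observation may be "aberrant" with small prior probability; the posterior for the location parameter is then insensitive to an outlier, which is also flagged by its posterior component probability. All data and settings are invented.

```python
import numpy as np

# Each observation is either "regular", N(mu, sigma^2), or "aberrant", N(mu, (k*sigma)^2),
# with small prior probability eps of being aberrant.
y = np.array([9.8, 10.2, 10.0, 9.9, 10.1, 17.5])      # one apparent outlier
sigma, k, eps = 0.3, 10.0, 0.05

def mix_density(y_i, mu):
    reg = np.exp(-0.5 * ((y_i - mu) / sigma) ** 2) / sigma
    abn = np.exp(-0.5 * ((y_i - mu) / (k * sigma)) ** 2) / (k * sigma)
    return (1 - eps) * reg + eps * abn

mu_grid = np.linspace(8.0, 18.0, 2001)
log_post = np.zeros_like(mu_grid)
for y_i in y:
    log_post += np.log(mix_density(y_i, mu_grid))      # flat prior on mu

post = np.exp(log_post - log_post.max())
post /= post.sum()
mu_mean = np.sum(post * mu_grid)
print(mu_mean)                                         # barely moved by the outlier

# Posterior probability that the last observation is aberrant, evaluated at mu_mean.
reg = (1 - eps) * np.exp(-0.5 * ((y[-1] - mu_mean) / sigma) ** 2) / sigma
abn = eps * np.exp(-0.5 * ((y[-1] - mu_mean) / (k * sigma)) ** 2) / (k * sigma)
print(abn / (reg + abn))                               # ≈ 1: flagged as an outlier
```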

Journal ArticleDOI
TL;DR: A brief description of Bayesian analysis using Monte Carlo integration is given and an example is presented that illustrates the Bayesian estimation of an asymmetric density and includes a display of distribution and density functions generated from the posterior distribution.
Abstract: A brief description of Bayesian analysis using Monte Carlo integration is given. An example is presented that illustrates the Bayesian estimation of an asymmetric density and includes a display of distribution and density functions generated from the posterior distribution. Other papers are referenced that contain examples that illustrate the power of this approach (a) to handle more accurate formulations of real problems, (b) to analyse difficult models and data for small samples, and (c) to compute predictive distributions and posterior distributions for many functions of the parameters.
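A minimal example of Bayesian analysis by Monte Carlo (importance) integration: draw from a convenient proposal, reweight by the unnormalized posterior, and read off posterior summaries as weighted averages. The "posterior" here is just a skewed Gamma(3, 1) kernel chosen for illustration; it is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an awkward, asymmetric posterior known only up to a constant
# (a Gamma(3, 1) kernel, purely illustrative).
def log_unnorm_post(theta):
    return 2.0 * np.log(theta) - theta

# Importance sampling: draw from an exponential proposal and reweight.
draws = rng.exponential(scale=4.0, size=200_000)
log_q = -np.log(4.0) - draws / 4.0                 # proposal log-density
w = np.exp(log_unnorm_post(draws) - log_q)
w /= w.sum()

print(np.sum(w * draws))                           # posterior mean (exact value: 3)
print(np.sum(w[draws <= 2.0]))                     # P(theta <= 2 | data), a point on the CDF
```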

Journal ArticleDOI
TL;DR: This paper reviews some of the basic Bayesian methodology as applied to legal proceedings, and gives a Bayesian interpretation of some key legal phrases, and outlines two different applications of this approach in the context of actual legal cases.
Abstract: Many authors have suggested the use of the Bayesian mode of inference in legal settings. In this paper we review some of the basic Bayesian methodology as applied to legal proceedings, and give a Bayesian interpretation of some key legal phrases. We then outline two different applications of this approach in the context of actual legal cases.

Book ChapterDOI
I.J. Good1
01 Jan 1983
TL;DR: In this article, the robustness of a hierarchical model for multinomial and contingency tables has been investigated, using the Ockham-Duns razor and the principle of parsimony.
Abstract: Publisher Summary This chapter focuses on the robustness of a hierarchical model for multinomial and contingency tables. Hierarchical Bayesian models can be used in a completely Bayesian manner or in a manner that has been called semi-, or pseudo-, or quasi Bayesian. In the latter case, the hyperparameters, or possibly the hyperhyperparameters, etc., are estimated by non-Bayesian methods, or at least by some not purely Bayesian method such as by maximum likelihood. In the so-called non-Bayesian statistics, the use of the Ockham-Duns razor is sometimes called the principle of parsimony, and it encourages one to avoid having more parameters than are necessary. In hierarchical Bayesian methods, one similarly uses a principle of parsimony or hyper-razor. There are at least two different ways to test a model. One is by means of significance tests after observations are made. Another is by examining the robustness of a model, that is, by seeing if small changes in the model lead to small changes in the implications. Tests for robustness can sometimes be carried out by the device of imaginary results before making observations. When calculating a Bayes factor F1, reasonable robustness with respect to the choice of hyperhyperparameters has also been found.

Book ChapterDOI
01 Jan 1983
TL;DR: In this paper, the authors discuss the Bayes estimation problem in the parametric framework and show that the mixed Bayes-minimax rule can be used as an approximation to the original Bayes rule in the sense of convergence of the risks.
Abstract: Publisher Summary This chapter discusses Bayes estimation. The Bayesian approach to statistical problems in the parametric framework has been thoroughly explored. It is known that the mixed Bayes-minimax rule d_k can be thought of as an approximation to the Bayes rule in the sense of convergence of the Bayes risks. Therefore, if the Bayes rule is difficult to compute in a particular problem, the mixed Bayes-minimax rule can be used as an approximation. To study the Bayes solution, a prior distribution has to be specified on the space of distribution functions, and to obtain the mixed Bayes-minimax solutions, a distribution has to be specified for the random distribution function F_k that is linear between the points.

Journal ArticleDOI
TL;DR: Simple Bayesian mechanisms for making probability assessments are discussed, along with their calibration, semicalibration, sufficiency, and domination properties.
Abstract: DeGroot and Fienberg (1982a) recently considered various aspects of the problem of evaluating the performance of probability appraisers. After briefly reviewing their notions of calibration and sufficiency we introduce related ideas of semicalibration and domination and consider their relationship to the earlier concepts. We then discuss some simple Bayesian mechanisms for making probability assessments and study their calibration, semicalibration, sufficiency, and domination properties. Finally, several results concerning the comparison of finite dichotomous experiments, relevant to the present work, are collected in an Appendix.
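A toy version of the calibration question, not the paper's formal definitions: a simple Bayesian assessor announces the mean of a Beta posterior as its probability, and an empirical check compares announced probabilities with observed frequencies within forecast bins. The event and simulation settings are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# A simple Bayesian assessor: forecast tomorrow's event probability as the mean of a
# Beta posterior updated on past outcomes (a toy stand-in for the mechanisms studied).
true_p = 0.3
outcomes = rng.random(5000) < true_p
a, b = 1.0, 1.0                                   # Beta(1, 1) prior
forecasts = np.empty(outcomes.size)
for t, x in enumerate(outcomes):
    forecasts[t] = a / (a + b)                    # announced probability
    a, b = a + x, b + (1 - x)                     # posterior update

# Empirical calibration check: within each forecast bin, does the announced
# probability match the observed relative frequency?
bins = np.linspace(0, 1, 11)
which = np.digitize(forecasts, bins) - 1
for j in np.unique(which):
    sel = which == j
    print(f"forecast bin {bins[j]:.1f}-{bins[j+1]:.1f}: "
          f"stated {forecasts[sel].mean():.2f}, observed {outcomes[sel].mean():.2f}")
```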

Journal ArticleDOI
TL;DR: In this article, the authors developed probability density functions (p.d.f.) for the widely used power generation reliability indices, Loss of Load and Unserved Energy, and derived the equations to calculate the parameters of the distributions of these indices upon a prescribed load plan.
Abstract: The primary objective of this research is to analytically develop probability density functions (p.d.f.) for the widely used power generation reliability indices, Loss of Load and Unserved Energy. The equations to calculate the parameters of the distributions of these indices upon a prescribed load plan are derived. In order to develop the theoretical structure for the problem stated, classical and decision theoretic (Bayesian) statistical inference are used as major tools along with the univariate and multivariate asymptotic theory. Consequently, an approximate numerical multiple integration scheme is employed to compute the parameters of the asymptotic normal densities of the reliability indices for the sample power networks. The authors believe that this statistical approach offers a more realistic alternative to the conventional reliability evaluation in generation systems; that is, to the calculation of an averaged value for the Loss of Load and Unserved Energy where outage data is traditionally assumed to be deterministic with certainty.

Journal ArticleDOI
TL;DR: In this paper, the Bayesian approach to solving various inferential problems related to dependent bivariate distributions is explored by using Ferguson's theory of Dirichlet processes, and a Bayesian test for testing positive dependence is derived.
Abstract: In this article the Bayesian approach to solving various inferential problems related to dependent bivariate distributions is explored by using Ferguson's theory of Dirichlet processes. Analogs of Kendall's τ and the concordance coefficient are defined in Section 2 to deal with discrete data. The Bayesian estimator under the squared error loss turns out to be a slightly modified version of Kendall's τ. The analysis is carried out to include the empirical Bayes approach in Section 3. In Section 4 a Bayesian test for testing positive dependence is derived. Some small sample comparisons are carried out in Section 5. Finally a numerical illustrative example is given in Section 6.

Book ChapterDOI
01 Jan 1983
TL;DR: In this paper, the authors consider four topics where naive application of frequentist statistical theory can lead to incorrect or unhelpful inferences, whereas careful attention to the above questions can leads to sensible frequentist inferences.
Abstract: Publisher Summary This chapter discusses frequentist inferences. The major operational difference between Bayesian and frequentist inferences is that in the latter, one must choose a reference set for the sample to obtain inferential probabilities. In the matter of choosing a reference set, Sherlock Holmes was right: many frequentist inferences are inadequate because of erroneous choices made prior to the experiment. Most of the mathematical development has to do with predata analysis, yet the question remains, "Is such-and-such likely to be a good procedure?" It is evident that relative frequency in a real sequence of repeated experiments is simply not a rich enough interpretation of probability. The brief general discussion raises questions about the difference between pre- and postdata probability calculations, the legitimacy of exact theory when integrated with practical application, and careful specification and understanding of what a statistical model means in practical terms. The purpose of the more detailed discussion which follows is to consider four topics where naive application of frequentist statistical theory can lead to incorrect or unhelpful inferences, whereas careful attention to the above questions can lead to sensible frequentist inferences. The four topics to be discussed are likelihood inference, inference from transformed data, randomization in design of experiments and surveys, and robust estimation. The Bayesian analysis has the general advantage of responding to specific features of the data. The unconditional distribution theory prevalent in the robustness literature is fine for choosing estimates which are generally good but not for analysis of a particular data set.

Book ChapterDOI
01 Jan 1983
TL;DR: In this article, the authors present a case study of the robustness of Bayesian methods of inference estimating the total in a finite population using transformations to normality, and they use a small, real data set with a known value for the quantity to be estimated.
Abstract: Publisher Summary This chapter presents a case study of the robustness of Bayesian methods of inference estimating the total in a finite population using transformations to normality. Bayesian intervals provide interval estimates that can legitimately be interpreted as such or at least to offer guidance as to when the intervals that are provided can be safely interpreted in this manner. The potential application of the statistical methods is often demonstrated either theoretically, from artificial data generated following some convenient analytic form, or from real data without a known correct answer. The case study presented here uses a small, real data set with a known value for the quantity to be estimated. It is surprising and instructive to see the care that may be needed to arrive at satisfactory inferences with real data. Simulation techniques are not needed to estimate totals routinely in practice. If stratification variables were available, that is, categorizing the municipalities into villages, towns, cities, and boroughs of New York City, to estimate the population total from a sample of 100, oversampling the large municipalities would be highly desirable. Robustness is not a property of data alone or questions alone, but particular combinations of data, questions and families of models. In many problems, statisticians may be able to define the questions being studied so as to have robust answers.