
Showing papers on "Bayes' theorem" published in 1996


Book
15 May 1996
TL;DR: Approaches for Statistical Inference: The Bayes Approach, Model Criticism and Selection, and Performance of Bayes Procedures.
Abstract: Approaches for Statistical Inference. The Bayes Approach. The Empirical Bayes Approach. Performance of Bayes Procedures. Bayesian Computation. Model Criticism and Selection. Special Methods and Models. Case Studies. Appendices.

2,413 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of evaluating the predictive uncertainty of TOPMODEL using the Bayesian Generalised Likelihood Uncertainty Estimation (GLUE) methodology in an application to the small Ringelbach research catchment in the Vosges, France.
Abstract: This paper addresses the problem of evaluating the predictive uncertainty of TOPMODEL using the Bayesian Generalised Likelihood Uncertainty Estimation (GLUE) methodology in an application to the small Ringelbach research catchment in the Vosges, France. The wide range of parameter sets giving acceptable simulations is demonstrated, and uncertainty bands are presented based on different likelihood measures. It is shown how the distributions of predicted discharges are non-Gaussian and vary in shape through time and with discharge. Updating of the likelihood weights using Bayes equation is demonstrated after each year of record and it is shown how the additional data can be evaluated in terms of the way they constrain the uncertainty bands.
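
The updating step described above is Bayes' equation applied to the likelihood weights of an ensemble of acceptable parameter sets. A minimal sketch of that step (the function names, array interface and quantile-based bounds are illustrative assumptions, not the authors' code):

```python
# Minimal GLUE-style sketch: Bayes' equation applied to the likelihood weights
# of an ensemble of acceptable parameter sets, plus weighted prediction bounds.
# Assumed interface (not from the paper): weights and likelihoods are 1-D arrays.
import numpy as np

def update_weights(prior_weights, new_likelihoods):
    """Combine existing weights with likelihoods from a new year of record."""
    w = prior_weights * new_likelihoods
    return w / w.sum()

def prediction_bounds(simulated_discharge, weights, lo=0.05, hi=0.95):
    """Weighted quantiles of the ensemble give (possibly non-Gaussian)
    uncertainty bands for the predicted discharge at one time step."""
    order = np.argsort(simulated_discharge)
    cdf = np.cumsum(weights[order])
    return (simulated_discharge[order][np.searchsorted(cdf, lo)],
            simulated_discharge[order][np.searchsorted(cdf, hi)])
```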

807 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the problem of evaluating the goodness of statistical models from an information-theoretic point of view and proposed information criteria for evaluating models constructed by various estimation procedures when the specified family of probability distributions does not contain the distribution generating the data.
Abstract: SUMMARY The problem of evaluating the goodness of statistical models is investigated from an information-theoretic point of view. Information criteria are proposed for evaluating models constructed by various estimation procedures when the specified family of probability distributions does not contain the distribution generating the data. The proposed criteria are applied to the evaluation of models estimated by maximum likelihood, robust, penalised likelihood, Bayes procedures, etc. We also discuss the use of the bootstrap in model evaluation problems and present a variance reduction technique in the bootstrap simulation.

432 citations


Journal ArticleDOI
TL;DR: The Decision-Theoretic Foundations of Statistical Inference as discussed by the authors, from Prior Information to Prior Distributions, Tests and Confidence Regions, Admissibility and Complete Classes, Invariance, Haar Measures, and Equivariant Estimators.
Abstract: Contents: Introduction.- Decision-Theoretic Foundations of Statistical Inference.- From Prior Information to Prior Distributions.- Bayesian Point Estimation.- Tests and Confidence Regions.- Admissibility and Complete Classes.- Invariance, Haar Measures, and Equivariant Estimators.- Hierarchical and Empirical Bayes Extensions.- Bayesian Calculations.- A Defense of the Bayesian Choice.

313 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived Bayes estimates of the parameters and functions thereof in the left-truncated exponential distribution and used asymmetric loss functions to reflect that, in most situations of interest, overestimation of a parameter does not produce the same economic consequences as underestimation.
Abstract: In this paper, Bayes estimates of the parameters and functions thereof in the left-truncated exponential distribution are derived. Asymmetric loss functions are used to reflect that, in most situations of interest, overestimation of a parameter does not produce the same economic consequences as underestimation. Both the non-informative prior and an informative prior on the reliability level at a prefixed time value are considered, and the statistical performance of the Bayes estimates is compared to that of the maximum likelihood estimates through the risk function.
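
The abstract does not name the asymmetric loss used; one common choice in this literature is the LINEX loss, shown here only to illustrate how asymmetry pulls the Bayes estimate away from the posterior mean:

$$L(\Delta) = b\bigl(e^{a\Delta} - a\Delta - 1\bigr), \quad \Delta = \hat\theta - \theta,\ b>0,\ a\neq 0, \qquad \hat\theta_{\mathrm{LINEX}} = -\frac{1}{a}\,\ln E\!\left[e^{-a\theta}\mid \text{data}\right].$$

For $a > 0$, overestimation is penalised more heavily than underestimation and the estimate sits below the posterior mean; a symmetric quadratic loss would return the posterior mean itself.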

241 citations


Book ChapterDOI
26 Aug 1996
TL;DR: New results are presented which show that within a Bayesian framework not only grammars, but also logic programs are learnable with arbitrarily low expected error from positive examples only and the upper bound for expected error of a learner which maximises the Bayes' posterior probability is within a small additive term of one which does the same from a mixture of positive and negative examples.
Abstract: Gold showed in 1967 that not even regular grammars can be exactly identified from positive examples alone. Since it is known that children learn natural grammars almost exclusively from positive examples, Gold's result has been used as theoretical support for Chomsky's theory of innate human linguistic abilities. In this paper, new results are presented which show that within a Bayesian framework not only grammars, but also logic programs are learnable with arbitrarily low expected error from positive examples only. In addition, we show that the upper bound for expected error of a learner which maximises the Bayes' posterior probability when learning from positive examples is within a small additive term of one which does the same from a mixture of positive and negative examples. An Inductive Logic Programming implementation is described which avoids the pitfalls of greedy search by global optimisation of this function during the local construction of individual clauses of the hypothesis. Results of testing this implementation on artificially-generated data-sets are reported. These results are in agreement with the theoretical predictions.

231 citations


BookDOI
01 Jul 1996
TL;DR: This volume contains selections from among the presentations at the Thirteenth International Workshop on Maximum Entropy and Bayesian Methods- MAXENT93 for short- held at the University of California, Santa Barbara (UCSB), August 1-5, 1993.
Abstract: This volume contains selections from among the presentations at the Thirteenth International Workshop on Maximum Entropy and Bayesian Methods (MAXENT93 for short), held at the University of California, Santa Barbara (UCSB), August 1-5, 1993. This annual workshop is devoted to the theory and practice of Bayesian probability and the use of the maximum entropy principle in assigning prior probabilities. Like its predecessors, MAXENT93 attracted researchers and scholars representing a wide diversity of disciplines and applications, including physicists, geophysicists, astronomers, statisticians, engineers, and economists, among others. Indeed, Bayesian methods increasingly compel the interest of any who would apply scientific inference. The impressive successes that follow when adherence to Bayesian principles replaces popular ad hoc approaches to problems of inference, so evident in the proceedings of past workshops, continue; many are reported in this volume. It is perhaps indicative of the growing acceptance of Bayesian methods that the most prominent controversy at the thirteenth workshop was not a Bayesian-frequentist confrontation but rather a disagreement over the suitability of using an approximation in the Bayesian formalism.

222 citations


Journal ArticleDOI
TL;DR: In this article, a unified approach to the nonhomogeneous Poisson process in software reliability models is given, which models the epochs of failures according to a general order statistics model or to a record value statistics model.
Abstract: A unified approach to the nonhomogeneous Poisson process in software reliability models is given. This approach models the epochs of failures according to a general order statistics model or to a record value statistics model. Their corresponding point processes can be related to the nonhomogeneous Poisson processes, for example, the Goel-Okumoto, the Musa-Okumoto, the Duane, and the Cox-Lewis processes. Bayesian inference for the nonhomogeneous Poisson processes is studied. The Gibbs sampling approach, sometimes with data augmentation and with the Metropolis algorithm, is used to compute the Bayes estimates of credible sets, mean time between failures, and the current system reliability. Model selection based on a predictive likelihood is studied. A numerical example with a real software failure data set is given.
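
For concreteness, the Goel-Okumoto process named above is the NHPP with mean value function and intensity (a standard parameterisation, not specific to this paper)

$$m(t) = a\bigl(1 - e^{-bt}\bigr), \qquad \lambda(t) = \frac{d\,m(t)}{dt} = a b\, e^{-bt}, \qquad a, b > 0,$$

so the expected number of failures in $(t, t+x]$ is $m(t+x) - m(t)$ and the current system reliability over a mission of length $x$ is $R(x \mid t) = \exp\{-[m(t+x) - m(t)]\}$.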

167 citations


Journal ArticleDOI
TL;DR: This paper presents a novel approach based upon the application of Bayes' theorem to ordinal and categorical data, which overcomes many of the problems associated with regression analysis.
Abstract: Much of the data which appears in the forensic and archaeological literature is ordinal or categorical. This is particularly true of the age related indicators presented by Gustafson in his method of human adult age estimation using the structural changes in human teeth. This technique is still being modified and elaborated. However, the statistical methods of regression analysis employed by Gustafson and others are not particularly appropriate to this type of data, but are still employed because alternatives have not yet been explored. This paper presents a novel approach based upon the application of Bayes' theorem to ordinal and categorical data, which overcomes many of the problems associated with regression analysis.
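
As a rough sketch of the kind of calculation involved (the conditional-independence assumption and all names below are illustrative, not taken from the paper), the posterior probability of each age category given categorical indicator scores follows directly from Bayes' theorem:

```python
# Hedged sketch: posterior over age categories from categorical indicators via
# Bayes' theorem. Assumes (for illustration only) conditionally independent
# indicators; the paper's actual model may differ.
import numpy as np

def age_posterior(prior, conditionals, scores):
    """prior[k]             : P(age category k)
       conditionals[j][k,s] : P(indicator j shows score s | age category k)
       scores[j]            : observed (integer-coded) score of indicator j"""
    post = np.asarray(prior, dtype=float).copy()
    for table, s in zip(conditionals, scores):
        post *= table[:, s]
    return post / post.sum()

# Hypothetical two-category example with one three-level indicator:
prior = [0.5, 0.5]
conditionals = [np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.3, 0.6]])]
print(age_posterior(prior, conditionals, scores=[2]))  # favours the second category
```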

138 citations


Journal ArticleDOI
TL;DR: In this paper, the empirical Bayes likelihood theory is extended to situations where the $\theta_k$'s have a regression structure as well as an empirical Bayes structure, and the results are presented in the form of a realistic computational scheme that allows model building and model checking in the spirit of regression analysis.
Abstract: Suppose that several independent experiments are observed, each one yielding a likelihood $L_k(\theta_k)$ for a real-valued parameter of interest $\theta_k$. For example, $\theta_k$ might be the log-odds ratio for a $2 \times 2$ table relating to the kth population in a series of medical experiments. This article concerns the following empirical Bayes question: How can we combine all of the likelihoods $L_k$ to get an interval estimate for any one of the $\theta_k$'s, say $\theta_1$? The results are presented in the form of a realistic computational scheme that allows model building and model checking in the spirit of a regression analysis. No special mathematical forms are required for the priors or the likelihoods. This scheme is designed to take advantage of recent methods that produce approximate numerical likelihoods $L_k(\theta_k)$ even in very complicated situations, with all nuisance parameters eliminated. The empirical Bayes likelihood theory is extended to situations where the $\theta_k$'s have a regression structure as well as an empiri...

130 citations


Journal ArticleDOI
07 Sep 1996-BMJ
TL;DR: In this week's BMJ, Lilford and Braunholtz explain the basis of Bayesian statistical theory and explore its use in evaluating evidence from medical research and incorporating such evidence into policy decisions about public health.
Abstract: In this week's BMJ, Lilford and Braunholtz (p 603) explain the basis of Bayesian statistical theory.1 They explore its use in evaluating evidence from medical research and incorporating such evidence into policy decisions about public health. When drawing inferences from statistical data, Bayesian theory is an alternative to the frequentist theory that has predominated in medical research over the past half century. As explained by Lilford and Braunholtz, the main difference between the two theories is the way they deal with probability. Consider a clinical trial comparing treatments A and B. Frequentist analysis may conclude that treatment A is superior because there is a low probability that such an extreme difference would have been observed when the treatments were in fact equivalent. Bayesian analysis begins with the observed difference and then asks how likely it is that treatment A is in fact superior to B. In other words, frequentists deduce the probability of observing an outcome given the true underlying state (in this case no difference between treatments), while Bayesians induce the probability of the existence of the true but as yet unknown underlying state (in this case, A is superior to B) given the data. The difference is quite profound, and, although the conclusions reached by applying the two methods may be qualitatively the same, the mode of expressing those conclusions will always be different. For example, a frequentist may conclude that the difference between treatments A and B is highly significant (P = 0.002), meaning that the chance of observing such an extreme difference when A and B are in fact …
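
A small numerical illustration of the Bayesian question posed above (the counts and the flat Beta(1, 1) priors are hypothetical, not taken from the editorial): the posterior probability that treatment A is superior to B can be read off by simulation.

```python
# Hypothetical illustration of "how likely is it that A is superior to B?"
# Binary outcomes, independent flat Beta(1, 1) priors on the success rates.
import numpy as np

rng = np.random.default_rng(0)
success_a, n_a = 30, 100   # made-up trial counts
success_b, n_b = 18, 100

theta_a = rng.beta(1 + success_a, 1 + n_a - success_a, size=100_000)
theta_b = rng.beta(1 + success_b, 1 + n_b - success_b, size=100_000)
print("P(A superior to B | data) ~", (theta_a > theta_b).mean())
```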

Journal ArticleDOI
TL;DR: Comparisons of two computerized testing procedures based on sequential probability ratio test and sequential Bayes methodology showed that under the conditions studied, the SPRT procedure required fewer test items than the Sequential Bayes procedure to achieve the same level of classification accuracy.
Abstract: Many testing applications focus on classifying examinees into one of two categories (e.g., pass/fail) rather than on obtaining an accurate estimate of level of ability. Examples of such applications include licensure and certification, college selection, and placement into entry-level or developmental college courses. With the increased availability of computers for the administration and scoring of tests, computerized testing procedures have been developed for efficiently making these classification decisions. The purpose of the research reported in this article was to compare two such procedures, one based on the sequential probability ratio test and the other on sequential Bayes methodology, to determine which required fewer items for classification when the procedures were matched on classification error rates. The results showed that under the conditions studied, the SPRT procedure required fewer test items than the sequential Bayes procedure to achieve the same level of classification accuracy.
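
A minimal sketch of the SPRT side of the comparison (the thresholds follow Wald's standard approximation; the per-item log-likelihood ratios would in practice come from an item response model, which is assumed here rather than given in the abstract):

```python
# Wald SPRT for a pass/fail classification: keep administering items until the
# cumulative log-likelihood ratio crosses one of two error-rate-based bounds.
import math

def sprt_classify(item_log_lrs, alpha=0.05, beta=0.05):
    """item_log_lrs: iterable of log[P(response | passing ability) /
                                     P(response | failing ability)] per item."""
    upper = math.log((1 - beta) / alpha)   # crossing it -> classify as "pass"
    lower = math.log(beta / (1 - alpha))   # crossing it -> classify as "fail"
    total, n = 0.0, 0
    for n, llr in enumerate(item_log_lrs, start=1):
        total += llr
        if total >= upper:
            return "pass", n
        if total <= lower:
            return "fail", n
    return "undecided", n
```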

Journal ArticleDOI
TL;DR: A Bayes model for step-stress accelerated life testing where the failure times at each stress level are exponentially distributed, but strict adherence to a time-transformation function is not required.
Abstract: This paper develops a Bayes model for step-stress accelerated life testing. The failure times at each stress level are exponentially distributed, but strict adherence to a time-transformation function is not required. Rather, prior information is used to define indirectly a multivariate prior distribution for the failure rates at the various stress levels. Our prior distribution preserves the natural ordering of the failure rates in both the prior and posterior estimates. Methods are developed for Bayes point estimates as well as for making probability statements for use-stress life parameters. The approach is illustrated with an example.

Journal ArticleDOI
TL;DR: This work transfers Lorden's approach to a continuous time model, discusses the structure of the Bayes risk, and shows the minimax optimality of the CUSUM procedures when the initial and final distributions are both known.
Abstract: We consider, in a Bayesian framework, the model $W_t = B_t + \theta (t - \nu)^+$, where $B$ is a standard Brownian motion, $\theta$ is arbitrary but known and $\nu$ is the unknown change-point. We transfer the construction of Ritov to this continuous time setup and show that the corresponding Bayes problems can be reduced to generalized parking problems.

Journal ArticleDOI
TL;DR: This work investigates several VQ-based algorithms that seek to minimize both the distortion of compressed images and errors in classifying their pixel blocks and introduces a tree-structured posterior estimator to produce the class posterior probabilities required for the Bayes risk computation in this design.
Abstract: Classification and compression play important roles in communicating digital information. Their combination is useful in many applications, including the detection of abnormalities in compressed medical images. In view of the similarities of compression and low-level classification, it is not surprising that there are many similar methods for their design. Because some of these methods are useful for designing vector quantizers, it seems natural that vector quantization (VQ) is explored for the combined goal. We investigate several VQ-based algorithms that seek to minimize both the distortion of compressed images and errors in classifying their pixel blocks. These algorithms are investigated with both full search and tree-structured codes. We emphasize a nonparametric technique that minimizes both error measures simultaneously by incorporating a Bayes risk component into the distortion measure used for the design and encoding. We introduce a tree-structured posterior estimator to produce the class posterior probabilities required for the Bayes risk computation in this design. For two different image sources, we demonstrate that this system provides superior classification while maintaining compression close or superior to that of several other VQ-based designs, including Kohonen's (1992) "learning vector quantizer" and a sequential quantizer/classifier design.
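
A schematic form of such a combined measure (not necessarily the authors' exact formulation) adds a weighted Bayes risk term to the usual squared-error distortion when encoding a block $x$ with codeword $i$:

$$J_\lambda(x, i) \;=\; \lVert x - \hat{x}_i \rVert^2 \;+\; \lambda \sum_{j} P(j \mid x)\, C(j, c_i),$$

where $\hat{x}_i$ is the reproduction vector, $c_i$ the class label attached to codeword $i$, $C(j, c_i)$ the cost of deciding class $c_i$ when the true class is $j$, and $P(j \mid x)$ the class posterior supplied by the tree-structured estimator; $\lambda$ trades compression quality against classification error.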

Journal ArticleDOI
TL;DR: The higher order Bayesian neural network is evaluated on a real world task of diagnosing a telephone exchange computer and by introducing stochastic spiking units, and soft interval coding, it is also possible to handle uncertain as well as continuous valued inputs.
Abstract: We treat a Bayesian confidence propagation neural network, primarily in a classifier context. The one-layer version of the network implements a naive Bayesian classifier, which requires the input attributes to be independent. This limitation is overcome by a higher order network. The higher order Bayesian neural network is evaluated on a real world task of diagnosing a telephone exchange computer. By introducing stochastic spiking units, and soft interval coding, it is also possible to handle uncertain as well as continuous valued inputs.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: This article presents a novel approach to estimating the Bayes error based on classifier combining techniques, and finds that the combiner-based estimate outperforms the classical methods.
Abstract: The Bayes error provides the lowest achievable error rate for a given pattern classification problem. There are several classical approaches for estimating or finding bounds for the Bayes error. One type of approach focuses on obtaining analytical bounds, which are both difficult to calculate and dependent on distribution parameters that may not be known. Another strategy is to estimate the class densities through non-parametric methods, and use these estimates to obtain bounds on the Bayes error. This article presents a novel approach to estimating the Bayes error based on classifier combining techniques. For an artificial data set where the Bayes error is known, the combiner-based estimate outperforms the classical methods.
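
For reference, the Bayes error referred to above is the error rate of the optimal (maximum-posterior) classifier,

$$E_{\mathrm{Bayes}} \;=\; \int \Bigl(1 - \max_i P(\omega_i \mid x)\Bigr)\, p(x)\, dx,$$

which no classifier can beat; the estimation problem arises because the class posteriors $P(\omega_i \mid x)$ are unknown in practice.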

Journal ArticleDOI
TL;DR: Through analyses of data from an innovative mathematics curriculum, the authors examine when and why it becomes important to employ a fully Bayesian approach, and discuss the need to study the sensitivity of results to alternative prior distributional assumptions for the variance components and for the random regression parameters.
Abstract: In applications of hierarchical models (HMs), a potential weakness of empirical Bayes estimation approaches is that they do not take into account uncertainty in the estimation of the variance components (see, e.g., Dempster, 1987). One possible solution entails employing a fully Bayesian approach, which involves specifying a prior probability distribution for the variance components and then integrating over the variance components as well as other unknowns in the HM to obtain a marginal posterior distribution of interest (see, e.g., Draper, 1995; Rubin, 1981). Though the required integrations are often exceedingly complex, Markov-chain Monte Carlo techniques (e.g., the Gibbs sampler) provide a viable means of obtaining marginal posteriors of interest in many complex settings. In this article, we fully generalize the Gibbs sampling algorithms presented in Seltzer (1993) to a broad range of settings in which vectors of random regression parameters in the HM (e.g., school means and slopes) are assumed mu...
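
A minimal two-level example of the kind of HM at issue (simplified relative to the random-regression models of the article):

$$y_{ij} \mid \theta_j, \sigma^2 \sim N(\theta_j, \sigma^2), \qquad \theta_j \mid \mu, \tau^2 \sim N(\mu, \tau^2), \qquad p(\mu, \sigma^2, \tau^2)\ \text{a prior on the remaining unknowns}.$$

A fully Bayesian analysis integrates over $\sigma^2$ and $\tau^2$ rather than plugging in point estimates; the Gibbs sampler does this by drawing each unknown in turn from its full conditional distribution, so the resulting marginal posterior for each $\theta_j$ reflects uncertainty in the variance components.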

Journal ArticleDOI
TL;DR: This paper derives marginal maximum likelihood (MML) estimation equations for the structural parameters of the Saltus model and suggests a computing approximation based on the EM algorithm.
Abstract: Item response theory models posit latent variables to account for regularities in students' performances on test items. Wilson's “Saltus” model extends the ideas of IRT to development that occurs in stages, where expected changes can be discontinuous, show different patterns for different types of items, or even exhibit reversals in probabilities of success on certain tasks. Examples include Piagetian stages of psychological development and Siegler's rule-based learning. This paper derives marginal maximum likelihood (MML) estimation equations for the structural parameters of the Saltus model and suggests a computing approximation based on the EM algorithm. For individual examinees, empirical Bayes probabilities of learning-stage are given, along with proficiency parameter estimates conditional on stage membership. The MML solution is illustrated with simulated data and an example from the domain of mixed number subtraction.

Book
09 Jan 1996
TL;DR: In this paper, the authors present a review of decision theory and its application in regression models, including stringent and UMP-invariant hypothesis tests for regression models and asymptotic optimality of ML and LR.
Abstract: Lists of Symbols and Notation. Introduction. Estimators for Regression Models: Least Squares and Projections. Maximum Likelihood. Bayesian Estimators for Regression. Minimax Estimators. Robust Regression. Hypothesis Tests for Regression Models: Stringent Tests. UMP Invariant Hypothesis Tests. Some Tests for Regression Models. Applications: F and Related Tests. Similar Regression Models. Asymptotic Theory: Consistency of Tests and Estimators: Direct Methods. Consistency of Estimators: Indirect Methods. Asymptotic Distributions. More Accurate Asymptotic Approximations. Asymptotic Optimality of ML and LR. Empirical Bayes: Applications: Simple Examples. Utilizing Information of Uncertain Validity. Hierarchical Bayes and the Gibbs Sampler. Appendices: The Multivariate Normal Distribution. Uniformly Most Powerful Tests. A Review of Decision Theory. Bibliography. Index.

Book ChapterDOI
01 Jan 1996
TL;DR: A Bayesian approach to the unsupervised discovery of classes in a set of cases, sometimes called finite mixture separation or clustering, which allows direct comparison of alternate density functions that differ in number of classes and/or individual class density functions.
Abstract: We describe a Bayesian approach to the unsupervised discovery of classes in a set of cases, sometimes called finite mixture separation or clustering. The main difference between clustering and our approach is that we search for the “best” set of class descriptions rather than grouping the cases themselves. We describe our classes in terms of probability distribution or density functions, and the locally maximal posterior probability parameters. We rate our classifications with an approximate posterior probability of the distribution function w.r.t. the data, obtained by marginalizing over all the parameters. Approximation is necessitated by the computational complexity of the joint probability, and our marginalization is w.r.t. a local maximum in the parameter space. This posterior probability rating allows direct comparison of alternate density functions that differ in number of classes and/or individual class density functions.
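
Schematically, the quantity being approximated for a classification $T$ with $C$ classes (mixture weights $\pi_c$ and class parameters $\theta_c$; the notation here is generic, not the authors'):

$$p(\mathbf{x} \mid T) \;=\; \int p(\boldsymbol{\pi}, \boldsymbol{\theta} \mid T)\, \prod_{n=1}^{N} \sum_{c=1}^{C} \pi_c\, p(x_n \mid \theta_c)\; d\boldsymbol{\pi}\, d\boldsymbol{\theta},$$

with the integral approximated around a local maximum of the parameter posterior; classifications that differ in $C$ or in the form of the class densities are then compared through this approximate marginal probability.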

Journal ArticleDOI
TL;DR: Compared with the random-effects (RE) model, the Bayesian methods are demonstrated to be relatively robust against a wide choice of specifications of prior information on heterogeneity, and allow more detailed and satisfactory statements to be made, not only about the overall risk but about the individual studies, on the basis of the combined information.

Journal ArticleDOI
TL;DR: Empirical Bayes estimators based on the two models are judged according to their ability to provide parameter estimates in a Cox model predicting clinical outcomes.
Abstract: In this paper we consider the choice of model used in estimation of trajectories of CD4 T-cell counts by empirical Bayes estimators. Tsiatis et al. have demonstrated that empirical Bayes estimates of CD4 values correct for the bias resulting from measurement error when using CD4 as a covariate in a Cox model to predict clinical events. Here, empirical Bayes estimates from a random effects model are compared to estimates from the more general stochastic regression model presented in Taylor et al. Empirical Bayes estimators based on the two models are judged according to their ability to provide parameter estimates in a Cox model predicting clinical outcomes. Data from ACTG 118 are used as an illustration.

Journal ArticleDOI
TL;DR: A consistent nonparametric empirical Bayes estimator of the prior density for directional data is proposed; the methodology is to use Fourier analysis on $S^2$ to adapt Euclidean techniques to this non-Euclidean environment.
Abstract: This paper proposes a consistent nonparametric empirical Bayes estimator of the prior density for directional data. The methodology is to use Fourier analysis on $S^2$ to adapt Euclidean techniques to this non-Euclidean environment. General consistency results are obtained. In addition, a discussion of efficient numerical computation of Fourier transforms on $S^2$ is given, and their applications to the methods suggested in this paper are sketched.

Journal ArticleDOI
TL;DR: This paper estimates component reliability from masked series-system life data, viz., data where the exact component causing system failure might be unknown, using a Bayes approach which considers prior information on the component reliabilities.
Abstract: This paper estimates component reliability from masked series-system life data, viz., data where the exact component causing system failure might be unknown. It focuses on a Bayes approach which considers prior information on the component reliabilities. In most practical settings, prior engineering knowledge on component reliabilities is extensive. Engineers routinely use prior knowledge and judgment in a variety of ways. The Bayes methodology proposed here provides a formal, realistic means of incorporating such subjective knowledge into the estimation process. In the event that little prior knowledge is available, conservative, or even noninformative, priors can be selected. The model is illustrated for a 2-component series system of exponential components. In particular it uses discrete-step priors because of their ease of development and interpretation. By taking advantage of the prior information, the Bayes point-estimates consistently perform well, i.e., are close to the MLE. While the approach is computationally intensive, the calculations can be easily computerized.

Book
04 Apr 1996
TL;DR: 1. An Introduction to MINITAB 2. Simulating Games of Chance 3. Introduction to Inference Using Bayes' Rule 4. Learning About a Proportion 5. Comparing Two Proportions 6. Learning About a Normal Mean 7. Learning about Relationships: Regression and Contingency Tables
Abstract: 1. An Introduction to MINITAB 2. Simulating Games of Chance 3. Introduction to Inference Using Bayes' Rule 4. Learning About a Proportion 5. Comparing Two Proportions 6. Learning About a Normal Mean 7. Learning About Two Normal Means 8. Learning about Relationships: Regression and Contingency Tables 9. Learning about Discrete Models 10. Learning about Continuous Models 11. General Methods of Summarizing Posterior Distributions APPENDICES: List of MINITAB Macros / Formula Used in the Macros / Index

Journal ArticleDOI
TL;DR: Bayesian inference under the principles of evolutionary parsimony is shown to be well calibrated with reasonable discriminating power for a wide range of realistic conditions, including conditions that violate the assumptions of evolutionary Parsimony.
Abstract: The reconstruction of phylogenetic trees from molecular sequences presents unusual problems for statistical inference. For example, three possible alternatives must be considered for four taxa when inferring the correct unrooted tree (referred to as a topology). In our view, classical hypothesis testing is poorly suited to this triangular set of alternative hypotheses. In this article, we develop Bayesian inference to determine the posterior probability that a four-taxon topology is correct given the sequence data and the evolutionary parsimony algorithm for phylogenetic reconstruction. We assess the frequency properties of our models in a large simulation study. Bayesian inference under the principles of evolutionary parsimony is shown to be well calibrated with reasonable discriminating power for a wide range of realistic conditions, including conditions that violate the assumptions of evolutionary parsimony.
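
For four taxa the three unrooted topologies $\tau_1, \tau_2, \tau_3$ exhaust the possibilities, so the posterior probability computed here takes the simple form

$$P(\tau_i \mid D) \;=\; \frac{P(D \mid \tau_i)\, P(\tau_i)}{\sum_{j=1}^{3} P(D \mid \tau_j)\, P(\tau_j)},$$

with the sequence data $D$ entering through the evolutionary parsimony algorithm and, absent other information, equal prior probabilities $P(\tau_j) = 1/3$.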

Journal ArticleDOI
TL;DR: An empirical Bayes method is proposed for small area estimation of the prevalence of non-rare conditions, whose variability is binomial and cannot be approximated by a Poisson model.
Abstract: Geographical studies are becoming increasingly common in epidemiology. The problems of small area investigations are well known, and several methods are available for the estimation and mapping of disease risk across small areas, with the emphasis mainly on applications concerning rare disease incidence or mortality. An empirical Bayes method is proposed for small area estimation of the prevalence of non-rare conditions, whose variability is binomial and cannot be approximated by a Poisson model. It is the direct equivalent of a semi-parametric non-iterative moment estimation method proposed in the Poisson case. As an example, the geographical distribution of the prevalence of respiratory symptoms in schoolchildren across 71 small areas in Huddersfield, Northern England is studied. Whereas random variability causes the crude area-specific prevalences to be unstable, the posterior estimates, corrected towards overall or local means, are capable of highlighting genuine extra-binomial variability. The method is very simple and can readily be applied to the study of a number of common conditions.

Journal ArticleDOI
01 Jan 1996-Genetics
TL;DR: Three Bayesian point estimators are compared with four traditional ones using the results of 10,000 simulated experiments and are found to be at least as efficient as the best of the previously known estimators.
Abstract: Bayesian procedures are developed for estimating mutation rates from fluctuation experiments. Three Bayesian point estimators are compared with four traditional ones using the results of 10,000 simulated experiments. The Bayesian estimators were found to be at least as efficient as the best of the previously known estimators. The best Bayesian estimator is one that uses $1/m^2$ as the prior probability density function and a quadratic loss function. The advantage of using these estimators is most pronounced when the number of fluctuation test tubes is small. Bayesian estimation allows the incorporation of prior knowledge about the estimated parameter, in which case the resulting estimators are the most efficient. It enables the straightforward construction of confidence intervals for the estimated parameter. The increase of efficiency with prior information and the narrowing of the confidence intervals with additional experimental results are investigated. The results of the simulations show that any potential inaccuracy of estimation arising from lumping together all cultures with more than n mutants (the jackpots) almost disappears at n = 70 (provided that the number of mutations in a culture is low). These methods are applied to a set of experimental data to illustrate their use.
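
For reference, the stated combination of a $1/m^2$ prior and quadratic loss makes the Bayes point estimate the posterior mean (writing $m$ for the parameter in the abstract's notation),

$$\hat{m} \;=\; E[m \mid \text{data}] \;=\; \frac{\int_0^\infty m\, L(\text{data} \mid m)\, m^{-2}\, dm}{\int_0^\infty L(\text{data} \mid m)\, m^{-2}\, dm},$$

where $L(\text{data} \mid m)$ is the fluctuation-test likelihood of the observed mutant counts; the same posterior yields the intervals mentioned in the abstract.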

Journal ArticleDOI
TL;DR: In this article, the inverse probability theorem of Bayes is used along with sampling theory to obtain objective criteria for choosing among rival models, and formulas for the relative posterior probabilities of candidate models and for their goodness of fit, when the models are fitted to a common data set with Normally distributed errors.
Abstract: The inverse probability theorem of Bayes is used, along with sampling theory, to obtain objective criteria for choosing among rival models. Formulas are given for the relative posterior probabilities of candidate models and for their goodness of fit, when the models are fitted to a common data set with Normally distributed errors. Cases of full, partial and minimal variance information are treated. The formulas are demonstrated with three examples (Example 1 treats linear models for data with a given variance; Example 2 treats a pair of nonlinear models, using a variance estimated by replication; and Example 3 treats a set of eighteen nonlinear models, using a variance estimated from residuals of a higher-order model (a kinetic study of hydrogenation of mixed isooctenes over a supported nickel catalyst))
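
In schematic form, the relative posterior probabilities referred to above follow from Bayes' theorem applied to the candidate models $M_1, \dots, M_K$:

$$P(M_i \mid D) \;=\; \frac{p(D \mid M_i)\, P(M_i)}{\sum_{j=1}^{K} p(D \mid M_j)\, P(M_j)}, \qquad p(D \mid M_i) = \int p(D \mid \theta_i, M_i)\, p(\theta_i \mid M_i)\, d\theta_i,$$

where the marginal likelihood $p(D \mid M_i)$ is evaluated under the Normal error assumption, and the treatment of the error variance depends on whether full, partial or minimal variance information is available.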