Showing papers on "Bayesian probability published in 1992"


Journal ArticleDOI
TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Abstract: The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed distribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a random-effects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.
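The multiple-sequence strategy described above is usually summarized by a potential scale reduction factor that compares between-sequence and within-sequence variation. The sketch below is a minimal illustration of that idea, not the paper's exact estimator; the variable names and the artificial "chains" are assumptions made for the example.

```python
import numpy as np

def potential_scale_reduction(chains):
    """Rough potential scale reduction for one scalar estimand.

    chains: array of shape (m, n) holding m independent sequences of length n,
    started from overdispersed points.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()        # average within-sequence variance
    B = n * chain_means.var(ddof=1)              # between-sequence variance (scaled)
    var_plus = (n - 1) / n * W + B / n           # pooled estimate of posterior variance
    return np.sqrt(var_plus / W)                 # values near 1 suggest the sequences have mixed

# Artificial output from four sequences that have not yet mixed:
rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 1000)) + rng.normal(scale=3.0, size=(4, 1))
print(potential_scale_reduction(chains))         # noticeably greater than 1
```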

13,884 citations


Journal ArticleDOI
01 May 1992
TL;DR: The Bayesian approach to regularization and model-comparison is demonstrated by studying the inference problem of interpolating noisy data; regularizing constants and noise levels are inferred by examining their posterior probability distributions.
Abstract: Although Bayesian analysis has been in use since Laplace, the Bayesian method of model-comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularization and model-comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other data modeling problems. Regularizing constants are set by examining their posterior probability distribution. Alternative regularizers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. Occam's razor is automatically embodied by this process. The way in which Bayes infers the values of regularizing constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
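Evidence-based comparison of regularizers is easiest to see in the linear-in-the-parameters case, where the weights can be integrated out in closed form. The sketch below is a generic illustration under that assumption; the polynomial basis, data, noise level and candidate regularization constants are invented for the example and are not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_evidence(Phi, y, alpha, sigma2):
    """Log marginal likelihood of y for y = Phi @ w + noise,
    with prior w ~ N(0, I/alpha) and Gaussian noise of variance sigma2.
    Integrating w out gives y ~ N(0, sigma2*I + Phi @ Phi.T / alpha)."""
    N = len(y)
    cov = sigma2 * np.eye(N) + Phi @ Phi.T / alpha
    return multivariate_normal(mean=np.zeros(N), cov=cov).logpdf(y)

# Compare two candidate regularization constants on noisy interpolation data.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 30)
y = np.sin(np.pi * x) + rng.normal(scale=0.1, size=x.size)
Phi = np.vander(x, 6, increasing=True)           # degree-5 polynomial basis (illustrative)
for alpha in (0.01, 100.0):
    print(alpha, log_evidence(Phi, y, alpha, sigma2=0.01))
```

The same function compares alternative basis sets: build a second design matrix and keep whichever gives the higher log evidence.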

4,194 citations


Journal ArticleDOI
TL;DR: A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks that automatically embodies "Occam's razor," penalizing overflexible and overcomplex models.
Abstract: A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
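The "effective number of well-determined parameters" mentioned in item (4) has a compact standard form in the evidence framework; in the notation usually used (and assumed here), α is the weight-decay constant and λ_i are the eigenvalues of the data-misfit Hessian:

```latex
% gamma counts the parameters whose values are determined by the data
% rather than by the prior; it lies between 0 and the total number k.
\gamma \;=\; \sum_{i=1}^{k} \frac{\lambda_i}{\lambda_i + \alpha},
\qquad 0 \le \gamma \le k .
```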

2,906 citations


Proceedings Article
12 Jul 1992
TL;DR: An average-case analysis of the Bayesian classifier, a simple induction algorithm that fares remarkably well on many learning tasks, is presented, and the behavioral implications of the analysis are explored through predicted learning curves for artificial domains.
Abstract: In this paper we present an average-case analysis of the Bayesian classifier, a simple induction algorithm that fares remarkably well on many learning tasks. Our analysis assumes a monotone conjunctive target concept, and independent, noise-free Boolean attributes. We calculate the probability that the algorithm will induce an arbitrary pair of concept descriptions and then use this to compute the probability of correct classification over the instance space. The analysis takes into account the number of training instances, the number of attributes, the distribution of these attributes, and the level of class noise. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning.
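For readers unfamiliar with the algorithm being analysed, the following is a minimal naive Bayesian classifier for Boolean attributes and a binary class. The Laplace smoothing, data generation and target concept (a conjunction of the first two attributes) are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def train_naive_bayes(X, y, eps=1.0):
    """Estimate P(class) and P(attribute = 1 | class) with add-eps smoothing."""
    priors, cond = {}, {}
    for c in (0, 1):
        Xc = X[y == c]
        priors[c] = (len(Xc) + eps) / (len(X) + 2 * eps)
        cond[c] = (Xc.sum(axis=0) + eps) / (len(Xc) + 2 * eps)
    return priors, cond

def classify(x, priors, cond):
    """Pick the class maximizing P(class) * prod_i P(x_i | class) (independence assumption)."""
    scores = {c: priors[c] * np.prod(np.where(x == 1, cond[c], 1 - cond[c]))
              for c in (0, 1)}
    return max(scores, key=scores.get)

# Monotone conjunctive target concept on noise-free Boolean attributes (illustrative).
rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(200, 5))
y = (X[:, 0] & X[:, 1]).astype(int)
priors, cond = train_naive_bayes(X, y)
print(classify(np.array([1, 1, 0, 0, 0]), priors, cond))   # expected: 1
```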

1,328 citations


Journal Article
TL;DR: In this article, a sampling-resampling perspective on Bayesian inference is presented that both has pedagogic appeal and suggests easily implemented, sampling-based calculation strategies.
Abstract: Even to the initiated, statistical calculations based on Bayes's Theorem can be daunting because of the numerical integrations required in all but the simplest applications. Moreover, from a teaching perspective, introductions to Bayesian statistics—if they are given at all—are circumscribed by these apparent calculational difficulties. Here we offer a straightforward sampling-resampling perspective on Bayesian inference, which has both pedagogic appeal and suggests easily implemented calculation strategies.
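The whole sampling-resampling recipe fits in a few lines: draw parameter values from the prior, weight each draw by its likelihood, and resample in proportion to the weights to obtain an approximate posterior sample. The beta-binomial example below is an illustration chosen for this note, not one of the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative model: theta ~ Beta(1, 1) prior; data are 7 successes in 10 Bernoulli trials.
n, k = 10, 7

theta = rng.beta(1.0, 1.0, size=50_000)                 # 1. sample from the prior
weights = theta**k * (1.0 - theta) ** (n - k)           # 2. weight by the likelihood
weights /= weights.sum()
posterior = rng.choice(theta, size=10_000, p=weights)   # 3. resample

print(posterior.mean())   # close to the exact Beta(8, 4) posterior mean, 8/12 ≈ 0.667
```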

861 citations


Journal ArticleDOI
TL;DR: A straightforward sampling-resampling perspective on Bayesian inference is offered, which has both pedagogic appeal and suggests easily implemented calculation strategies.
Abstract: Even to the initiated, statistical calculations based on Bayes's Theorem can be daunting because of the numerical integrations required in all but the simplest applications. Moreover, from a teaching perspective, introductions to Bayesian statistics—if they are given at all—are circumscribed by these apparent calculational difficulties. Here we offer a straightforward sampling-resampling perspective on Bayesian inference, which has both pedagogic appeal and suggests easily implemented calculation strategies.

852 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the Bayesian framework for model comparison described for regression models in MacKay (1992a,b) can also be applied to classification problems and an information-based data selection criterion is derived and demonstrated within this framework.
Abstract: Three Bayesian ideas are presented for supervised adaptive classifiers. First, it is argued that the output of a classifier should be obtained by marginalizing over the posterior distribution of the parameters; a simple approximation to this integral is proposed and demonstrated. This involves a "moderation" of the most probable classifier's outputs, and yields improved performance. Second, it is demonstrated that the Bayesian framework for model comparison described for regression models in MacKay (1992a,b) can also be applied to classification problems. This framework successfully chooses the magnitude of weight decay terms, and ranks solutions found using different numbers of hidden units. Third, an information-based data selection criterion is derived and demonstrated within this framework.
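The "moderation" of the most probable classifier's outputs has a simple closed form under a Gaussian approximation to the weight posterior; a standard approximation of this kind scales the activation before it is passed through the logistic function (the exact expression used in the paper may differ in detail):

```latex
% a   : activation of the most probable network for input x
% s^2 : variance of that activation induced by the posterior over weights
P(t = 1 \mid \mathbf{x}, D) \;\approx\; \sigma\!\big(\kappa(s^2)\, a\big),
\qquad
\kappa(s^2) = \Big(1 + \tfrac{\pi s^2}{8}\Big)^{-1/2},
\qquad
\sigma(z) = \frac{1}{1 + e^{-z}} .
```

Because κ(s²) < 1 whenever s² > 0, uncertain inputs are pushed toward a probability of 1/2, which is the improved behaviour described above.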

768 citations


DissertationDOI
01 Jan 1992
TL;DR: The Bayesian framework for model comparison and regularisation is demonstrated by studying interpolation and classification problems modelled with both linear and non-linear models, and it is shown that the careful incorporation of error bar information into a classifier's predictions yields improved performance.
Abstract: The Bayesian framework for model comparison and regularisation is demonstrated by studying interpolation and classification problems modelled with both linear and non-linear models. This framework quantitatively embodies 'Occam's razor'. Over-complex and under-regularised models are automatically inferred to be less probable, even though their flexibility allows them to fit the data better. When applied to 'neural networks', the Bayesian framework makes possible (1) objective comparison of solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of type of weight decay terms (or regularisers); (4) on-line techniques for optimising weight decay (or regularisation constant) magnitude; (5) a measure of the effective number of well-determined parameters in a model; (6) quantified estimates of the error bars on network parameters and on network output. In the case of classification models, it is shown that the careful incorporation of error bar information into a classifier's predictions yields improved performance. Comparisons of the inferences of the Bayesian framework with more traditional cross-validation methods help detect poor underlying assumptions in learning models. The relationship of the Bayesian learning framework to 'active learning' is examined. Objective functions are discussed which measure the expected informativeness of candidate data measurements, in the context of both interpolation and classification problems. The concepts and methods described in this thesis are quite general and will be applicable to other data modelling problems whether they involve regression, classification or density estimation.

605 citations


Journal Article
TL;DR: 'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference.
Abstract: 'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
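The quantitative mechanism is usually written as an "Occam factor" multiplying the best-fit likelihood: the evidence for a hypothesis is its best-fit likelihood times the ratio of the posterior to the prior width of its adjustable parameter, so broad prior ranges (many effective adjustable parameters) are automatically penalized.

```latex
% One adjustable parameter w; \hat{w} is its best-fit value,
% \sigma_{w|D} the posterior width, \sigma_{w} the prior width.
P(D \mid H) \;\approx\;
\underbrace{P(D \mid \hat{w}, H)}_{\text{best-fit likelihood}}
\;\times\;
\underbrace{\frac{\sigma_{w \mid D}}{\sigma_{w}}}_{\text{Occam factor}\;\le\;1}
```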

518 citations


Journal ArticleDOI
TL;DR: It has earlier been shown how to exploit local structure in the specification of a discrete probability model for fast and efficient computation; the purpose of this article is to extend that computational scheme to mixed qualitative and quantitative variables, thereby paving the way for exploiting probability-based models as parts of realistic systems for planning and decision support.
Abstract: A scheme is presented for modeling and local computation of exact probabilities, means, and variances for mixed qualitative and quantitative variables. The models assume that the conditional distribution of the quantitative variables, given the qualitative, is multivariate Gaussian. The computational architecture is set up by forming a tree of belief universes, and the calculations are then performed by local message passing between universes. The asymmetry between the quantitative and qualitative variables sets some additional limitations for the specification and propagation structure. Approximate methods when these are not appropriately fulfilled are sketched. It has earlier been shown how to exploit the local structure in the specification of a discrete probability model for fast and efficient computation, thereby paving the way for exploiting probability-based models as parts of realistic systems for planning and decision support. The purpose of this article is to extend this computational s...

497 citations


Journal ArticleDOI
TL;DR: This paper illustrates how the Gibbs sampler approach to Bayesian calculation avoids these difficulties and leads to straightforwardly implemented procedures, even for apparently very complicated model forms.
Abstract: Constrained parameter problems arise in a wide variety of applications, including bioassay, actuarial graduation, ordinal categorical data, response surfaces, reliability development testing, and variance component models. Truncated data problems arise naturally in survival and failure time studies, ordinal data models, and categorical data studies aimed at uncovering underlying continuous distributions. In many applications both parameter constraints and data truncation are present. The statistical literature on such problems is very extensive, reflecting both the problems’ widespread occurrence in applications and the methodological challenges that they pose. However, it is striking that so little of this applied and theoretical literature involves a parametric Bayesian perspective. From a technical viewpoint, this perhaps is not difficult to understand. The fundamental tool for Bayesian calculations in typical realistic models is (multidimensional) numerical integration, which often is problem...
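To make the point concrete, the sketch below runs a Gibbs sampler for a toy constrained-parameter problem: a bivariate normal posterior restricted to the region θ1 ≤ θ2, sampled by drawing each full conditional as a truncated normal. The target distribution and constraint are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_normal(mean, sd, lower, upper, rng):
    a, b = (lower - mean) / sd, (upper - mean) / sd   # standardized truncation points
    return truncnorm.rvs(a, b, loc=mean, scale=sd, random_state=rng)

# Toy target: (theta1, theta2) ~ N(m, V) restricted to theta1 <= theta2.
m = np.array([1.0, 1.2])
V = np.array([[0.30, 0.10],
              [0.10, 0.20]])
rng = np.random.default_rng(4)

theta = np.array([0.5, 1.5])
draws = []
for _ in range(5000):
    # theta1 | theta2: normal full conditional, truncated above at theta2.
    mean1 = m[0] + V[0, 1] / V[1, 1] * (theta[1] - m[1])
    sd1 = np.sqrt(V[0, 0] - V[0, 1] ** 2 / V[1, 1])
    theta[0] = truncated_normal(mean1, sd1, -np.inf, theta[1], rng)
    # theta2 | theta1: normal full conditional, truncated below at theta1.
    mean2 = m[1] + V[0, 1] / V[0, 0] * (theta[0] - m[0])
    sd2 = np.sqrt(V[1, 1] - V[0, 1] ** 2 / V[0, 0])
    theta[1] = truncated_normal(mean2, sd2, theta[0], np.inf, rng)
    draws.append(theta.copy())

print(np.mean(draws, axis=0))   # posterior means under the ordering constraint
```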

Journal ArticleDOI
TL;DR: In this paper, the Gibbs sampler is used to perform a fully Bayesian analysis of linear and nonlinear population models for a variety of population models using the Gibbs sampling algorithm.
Abstract: A fully Bayesian analysis of linear and nonlinear population models has previously been unavailable, as a consequence of the seeming impossibility of performing the necessary numerical integrations in the complex multi-parameter structures typically arising in such models. It is demonstrated that, for a variety of linear and nonlinear population models, a fully Bayesian analysis can be implemented in a straightforward manner using the Gibbs sampler. The approach is illustrated with examples involving challenging problems of outliers and mean-variance relationships in population modelling.

Journal ArticleDOI
TL;DR: Formulas are introduced for estimating the probability of failure when testing reveals no errors; they incorporate random testing results, information about the input distribution, and Bayesian prior assumptions about the probability of failure of the software.
Abstract: Formulas for estimating the probability of failure when testing reveals no errors are introduced. These formulas incorporate random testing results, information about the input distribution, and prior assumptions about the probability of failure of the software. The formulas are not restricted to equally likely input distributions, and the probability of failure estimate can be adjusted when assumptions about the input distribution change. The formulas are based on a discrete sample space statistical model of software and include Bayesian prior assumptions. Reusable software and software in life-critical applications are particularly appropriate candidates for this type of analysis.
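A generic conjugate version of this calculation (a simplification, not the paper's discrete sample-space model) shows the structure: put a Beta prior on the per-run failure probability θ; after n independent failure-free test runs the posterior is still a Beta distribution, so point estimates and tail probabilities are available in closed form.

```latex
% theta : probability that a randomly selected input causes a failure
% Prior theta ~ Beta(a, b); data: n test runs, no failures observed.
\theta \mid \text{data} \;\sim\; \mathrm{Beta}(a,\; b + n),
\qquad
E[\theta \mid \text{data}] = \frac{a}{a + b + n},
\qquad
P(\theta > \theta_0 \mid \text{data}) = 1 - I_{\theta_0}(a,\, b + n),
```

where I_x(·,·) is the regularized incomplete beta function. With a uniform prior (a = b = 1), for example, n failure-free runs give a posterior mean failure probability of 1/(n + 2).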

Journal ArticleDOI
TL;DR: Two approaches to relative risk estimation are described and compared, including an empirical Bayes approach that uses a technique of penalized log-likelihood maximization and an innovative stochastic simulation technique called the Gibbs sampler.
Abstract: This paper reviews methods for mapping geographical variation in disease incidence and mortality. Recent results in Bayesian hierarchical modelling of relative risk are discussed. Two approaches to relative risk estimation, along with the related computational procedures, are described and compared. The first is an empirical Bayes approach that uses a technique of penalized log-likelihood maximization; the second approach is fully Bayesian, and uses an innovative stochastic simulation technique called the Gibbs sampler. We chose to map geographical variation in breast cancer and Hodgkin's disease mortality as observed in all the health care districts of Sardinia, to illustrate relevant problems, methods and techniques.

Journal ArticleDOI
TL;DR: It is shown that if the observed difference is the true one, the probability of repeating a statistically significant result, the 'replication probability', is substantially lower than expected.
Abstract: It is conventionally thought that a small p-value confers high credibility on the observed alternative hypothesis, and that a repetition of the same experiment will have a high probability of resulting again in statistical significance. It is shown that if the observed difference is the true one, the probability of repeating a statistically significant result, the 'replication probability', is substantially lower than expected. The reason for this is a mistake that generates other seeming paradoxes: the interpretation of the post-trial p-value in the same way as the pre-trial alpha error. The replication probability can be used as a frequentist counterpart of Bayesian and likelihood methods to show that p-values overstate the evidence against the null hypothesis.
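The headline result is easy to reproduce for a normal test statistic. If the observed effect is assumed to be the true one, a same-sized replicate's z-statistic is centred on the observed z, so an experiment that was just significant at the two-sided 0.05 level replicates with probability of only about one half:

```latex
% z_obs = 1.96 (two-sided p = 0.05); assume the true standardized effect equals z_obs.
Z_{\mathrm{new}} \sim N(z_{\mathrm{obs}},\, 1)
\;\Rightarrow\;
P\big(Z_{\mathrm{new}} > 1.96\big) = \Phi(z_{\mathrm{obs}} - 1.96) = \Phi(0) = 0.5
```

(ignoring the negligible probability of a significant result in the opposite direction).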

Posted Content
01 Jan 1992
TL;DR: Bayesian analysis of the model using noninformative and informative prior probability densities is provided which extends and generalizes results obtained by Winkler (1981) and compared with non-Bayesian methods of combining forecasts relying explicitly on a statistical model for the individual forecasts.
Abstract: This paper addresses issues such as: Does it always pay to combine individual forecasts of a variable? Should one combine an unbiased forecast with one that is heavily biased? Should one use optimal weights as suggested by Bates and Granger over twenty years ago? A simple model which accounts for the main features of individual forecasts is put forward. Bayesian analysis of the model using noninformative and informative prior probability densities is provided which extends and generalizes results obtained by Winkler (1981) and compared with non-Bayesian methods of combining forecasts relying explicitly on a statistical model for the individual forecasts. It is shown that in some instances it is sensible to use a simple average of individual forecasts instead of using Bates and Granger type weights. Finally, model uncertainty is considered and the issue of combining different models for individual forecasts is addressed.

Journal ArticleDOI
TL;DR: In this paper, a simple model which accounts for the main features of individual forecasts is put forward, and a Bayesian analysis of the model using noninformative and informative prior probability densities is provided.
Abstract: This paper addresses issues such as: Does it always pay to combine individual forecasts of a variable? Should one combine an unbiased forecast with one that is heavily biased? Should one use optimal weights as suggested by Bates and Granger over twenty years ago? A simple model which accounts for the main features of individual forecasts is put forward. Bayesian analysis of the model using noninformative and informative prior probability densities is provided which extends and generalizes results obtained by Winkler (1981) and compared with non-Bayesian methods of combining forecasts relying explicitly on a statistical model for the individual forecasts. It is shown that in some instances it is sensible to use a simple average of individual forecasts instead of using Bates and Granger type weights. Finally, model uncertainty is considered and the issue of combining different models for individual forecasts is addressed.
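For context, the Bates and Granger weights referred to here are the variance-minimizing weights for combining unbiased forecasts with a known error covariance matrix. The short sketch below compares them with the simple average; the covariance matrix is invented for illustration.

```python
import numpy as np

def bates_granger_weights(Sigma):
    """Weights minimizing the combined error variance, for unbiased forecasts
    with error covariance Sigma: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones(Sigma.shape[0])
    return inv @ ones / (ones @ inv @ ones)

# Illustrative error covariance for three individual forecasts.
Sigma = np.array([[1.0, 0.6, 0.5],
                  [0.6, 1.2, 0.7],
                  [0.5, 0.7, 2.0]])

for name, w in (("optimal", bates_granger_weights(Sigma)),
                ("simple average", np.full(3, 1 / 3))):
    print(name, np.round(w, 3), "combined error variance:", round(w @ Sigma @ w, 3))
```

When Σ must itself be estimated from a short forecasting record, the estimation error in these weights is one reason the simple average can do better in practice, which is part of the point made above.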

Journal ArticleDOI
TL;DR: In this paper, a family of multivariate dynamic generalized linear models is introduced as a general framework for the analysis of time series with observations from the exponential family, and a different approach to filtering and smoothing is chosen.
Abstract: A family of multivariate dynamic generalized linear models is introduced as a general framework for the analysis of time series with observations from the exponential family. Besides common conditionally Gaussian models, this article deals with univariate models for counted and binary data and, as the most interesting multivariate case, models for nonstationary multicategorical time series. For univariate responses, a related yet different class of models has been introduced in a Bayesian setting by West, Harrison and Migon. Assuming conjugate prior-posterior distributions for the natural parameter of the exponential family, they derive an approximate filter for estimation of time-varying states or parameters. However, their method raises some problems; in particular, in extending it to the multivariate case. A different approach to filtering and smoothing is chosen in this article. To avoid a full Bayesian analysis based on numerical integration, which becomes computationally critical for higher...

Journal ArticleDOI
TL;DR: This paper develops and implements a fully Bayesian approach to meta-analysis, in which uncertainty about effects in distinct but comparable studies is represented by an exchangeable prior distribution, along with a parametrization that allows a unified approach to deal easily with both clinical trial and case-control study data.
Abstract: This paper develops and implements a fully Bayesian approach to meta-analysis, in which uncertainty about effects in distinct but comparable studies is represented by an exchangeable prior distribution. Specifically, hierarchical normal models are used, along with a parametrization that allows a unified approach to deal easily with both clinical trial and case-control study data. Monte Carlo methods are used to obtain posterior distributions for parameters of interest, integrating out the unknown parameters of the exchangeable prior or ‘random effects’ distribution. The approach is illustrated with two examples, the first involving a data set on the effect of beta-blockers after myocardial infarction, and the second based on a classic data set comprising 14 case-control studies on the effects of smoking on lung cancer. In both examples, rather different conclusions from those previously published are obtained. In particular, it is claimed that widely used methods for meta-analysis, which involve complete pooling of ‘O-E’ values, lead to understatement of uncertainty in the estimation of overall or typical effect size.
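A minimal Gibbs-sampling sketch of an exchangeable hierarchical normal model of this kind is given below: study estimates y_i with known standard errors s_i, study effects θ_i drawn from a common N(μ, τ²) distribution, a flat prior on μ and a weak inverse-gamma prior on τ². The data, priors and variable names are illustrative and are not the paper's beta-blocker or smoking data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative study-level estimates (e.g. log odds ratios) and standard errors.
y = np.array([-0.30, -0.10, -0.25, 0.05, -0.40])
s = np.array([0.15, 0.20, 0.10, 0.25, 0.30])
k = len(y)

a0, b0 = 1.0, 0.01                    # weak inverse-gamma prior on tau^2 (assumption)
mu, tau2 = 0.0, 0.1
keep_mu, keep_tau = [], []

for it in range(10_000):
    # 1. Study effects theta_i | rest: precision-weighted normal draws.
    prec = 1.0 / s**2 + 1.0 / tau2
    mean = (y / s**2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # 2. Overall effect mu | rest (flat prior).
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))
    # 3. Between-study variance tau^2 | rest: inverse-gamma draw.
    tau2 = 1.0 / rng.gamma(a0 + k / 2, 1.0 / (b0 + 0.5 * np.sum((theta - mu) ** 2)))
    if it >= 2000:                    # discard burn-in
        keep_mu.append(mu)
        keep_tau.append(np.sqrt(tau2))

print("posterior mean of mu :", np.mean(keep_mu))
print("posterior mean of tau:", np.mean(keep_tau))
```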

Book ChapterDOI
01 Jan 1992
TL;DR: It is shown that Bayesian inference from data modeled by a mixture distribution can feasibly be performed via Monte Carlo simulation, and the true Bayesian predictive distribution is exhibited, implicitly integrating over the entire underlying parameter space.
Abstract: It is shown that Bayesian inference from data modeled by a mixture distribution can feasibly be performed via Monte Carlo simulation. This method exhibits the true Bayesian predictive distribution, implicitly integrating over the entire underlying parameter space. An infinite number of mixture components can be accommodated without difficulty, using a prior distribution for mixing proportions that selects a reasonable subset of components to explain any finite training set. The need to decide on a “correct” number of components is thereby avoided. The feasibility of the method is shown empirically for a simple classification task.

Journal ArticleDOI
TL;DR: In this paper, a Bayesian alternative to Kriging is developed that permits temporal and spatial modeling to be done in a convenient and flexible way; at the same time, model misspecifications can be corrected by additional data if and when they become available, and past data may be used in a systematic way to fit model parameters.

Book ChapterDOI
TL;DR: A new method is presented for the detection and measurement of a periodic signal in a data set when there is no prior knowledge of the existence of such a signal or of its characteristics; it is applicable to data consisting of the locations or times of individual events.
Abstract: We present a new method for the detection and measurement of a periodic signal in a data set when we have no prior knowledge of the existence of such a signal or of its characteristics. It is applicable to data consisting of the locations or times of individual events. To address the detection problem, we use Bayes’ theorem to compare a constant rate model for the signal to models with periodic structure. The periodic models describe the signal plus background rate as a stepwise distribution in m bins per period, for various values of m. The Bayesian posterior probability for a periodic model contains a term which quantifies Ockham’s razor, penalizing successively more complicated periodic models for their greater complexity even though they are assigned equal prior probabilities. The calculation thus balances model simplicity with goodness-of-fit, allowing us to determine both whether there is evidence for a periodic signal, and the optimum number of bins for describing the structure in the data. Unlike the results of traditional “frequentist” calculations, the outcome of the Bayesian calculation does not depend on the number of periods examined, but only on the range examined. Once a signal is detected, we again use Bayes’ theorem to estimate the frequency of the signal. The probability density for the frequency is inversely proportional to the multiplicity of the binned events and is thus maximized for the frequency leading to the binned event distribution with minimum combinatorial entropy. The method is capable of handling gaps in the data due to intermittent observing or dead time.

Journal ArticleDOI
Irving John Good
TL;DR: In this paper, various compromises that have occurred between Bayesian and non-Bayesian methods are reviewed and a citation is provided that discusses the inevitability of compromises within the Bayesian approach.
Abstract: Various compromises that have occurred between Bayesian and non-Bayesian methods are reviewed. (A citation is provided that discusses the inevitability of compromises within the Bayesian approach.) One example deals with the masses of elementary particles, but no knowledge of physics will be assumed.

Book ChapterDOI
01 Jan 1992
TL;DR: In this paper, the authors compare the performance of Bayesian and frequentist methods for astrophysics problems using the Poisson distribution, including the analysis of on/off measurements of a weak source in a strong background.
Abstract: The “frequentist” approach to statistics, currently dominating statistical practice in astrophysics, is compared to the historically older Bayesian approach, which is now growing in popularity in other scientific disciplines, and which provides unique, optimal solutions to well-posed problems. The two approaches address the same questions with very different calculations, but in simple cases often give the same final results, confusing the issue of whether one is superior to the other. Here frequentist and Bayesian methods are applied to problems where such a mathematical coincidence does not occur, allowing assessment of their relative merits based on their performance, rather than philosophical argument. Emphasis is placed on a key distinction between the two approaches: Bayesian methods, based on comparisons among alternative hypotheses using the single observed data set, consider averages over hypotheses; frequentist methods, in contrast, average over hypothetical alternative data samples and consider hypothesis averaging to be irrelevant. Simple problems are presented that magnify the consequences of this distinction to where common sense can confidently judge between the methods. These demonstrate the irrelevance of sample averaging, and the necessity of hypothesis averaging, revealing frequentist methods to be fundamentally flawed. Bayesian methods are then presented for astrophysically relevant problems using the Poisson distribution, including the analysis of “on/off” measurements of a weak source in a strong background. Weaknesses of the presently used frequentist methods for these problems are straightforwardly overcome using Bayesian methods. Additional existing applications of Bayesian inference to astrophysical problems are noted.

Journal ArticleDOI
TL;DR: In this paper, Bayesian analyses of traditional normal-mixture models for classification and discrimination are discussed; the development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling.
Abstract: We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques, including the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices, the computation of exact posterior classification probabilities for observed data and for future cases to be classified, and posterior distributions for these probabilities that allow for assessment of second-level uncertainties in classification.

Journal ArticleDOI
TL;DR: It is concluded that little progress has been made on prediction of the secondary structure of proteins given their primary sequence, despite the application of a variety of sophisticated algorithms such as neural networks, and that further advances will require a better understanding of the relevant biophysics.

Journal ArticleDOI
TL;DR: In this article, the posterior expectation of the Euclidean distance between the estimates and the parameters is minimized subject to matching the first two moments of the histogram of the estimates with the posterior expectations of the first two moments of the histogram of the parameters.
Abstract: Bayesian techniques are widely used in these days for simultaneous estimation of several parameters in compound decision problems. Often, however, the main objective is to produce an ensemble of parameter estimates whose histogram is in some sense close to the histogram of population parameters. This is for example the situation in subgroup analysis, where the problem is not only to estimate the different components of a parameter vector, but also to identify the parameters that are above, and the others that are below a certain specified cutoff point. We have proposed in this paper Bayes estimates in a very general context that meet this need. These estimates are obtained by matching the first two moments of the histogram of the estimates, and the posterior expectations of the first two moments of the histogram of the parameters, and minimizing, subject to these conditions, the posterior expectation of the Euclidean distance between the estimates and the parameters. Several applications of the m...

Journal ArticleDOI
TL;DR: In this article, the validity of posterior probability statements follows from probability calculus when the likelihood is the density of the observations, and a more intuitive definition of validity is introduced, based on coverage of posterior sets.
Abstract: The validity of posterior probability statements follows from probability calculus when the likelihood is the density of the observations. To investigate other cases, a second, more intuitive definition of validity is introduced, based on coverage of posterior sets. This notion of validity suggests that the likelihood must be the density of a statistic, not necessarily sufficient, for posterior probability statements to be valid. A convenient numerical method is proposed to invalidate the use of certain likelihoods for Bayesian analysis. Integrated, marginal, and conditional likelihoods, derived to avoid nuisance parameters, are also discussed.

Proceedings Article
Subutai Ahmad, Volker Tresp
30 Nov 1992
TL;DR: Bayesian techniques for extracting class probabilities given partial data are discussed; it is shown how to obtain closed-form approximations to the Bayesian solution using Gaussian basis function networks, and the approach is validated on a complex task (3D hand gesture recognition).
Abstract: In visual processing the ability to deal with missing and noisy information is crucial. Occlusions and unreliable feature detectors often lead to situations where little or no direct information about features is available. However the available information is usually sufficient to highly constrain the outputs. We discuss Bayesian techniques for extracting class probabilities given partial data. The optimal solution involves integrating over the missing dimensions weighted by the local probability densities. We show how to obtain closed-form approximations to the Bayesian solution using Gaussian basis function networks. The framework extends naturally to the case of noisy features. Simulations on a complex task (3D hand gesture recognition) validate the theory. When both integration and weighting by input densities are used, performance decreases gracefully with the number of missing or noisy features. Performance is substantially degraded if either step is omitted.

Journal ArticleDOI
TL;DR: In this article, the problem of sample size determination in the context of Bayesian analysis, decision theory and quality management is considered, and exact solutions are presented for determining the sample sizes when preset precision conditions are imposed on commonly used criteria such as posterior variance, Bayes risk and expected value of sample information.
Abstract: The problem of sample size determination in the context of Bayesian analysis, decision theory and quality management is considered. For the familiar, and practically important, parameter of a binomial distribution with a beta prior, we present complete and exact solutions for determining the sample sizes when preset precision conditions are imposed on commonly used criteria such as posterior variance, Bayes risk and expected value of sample information. The results obtained here also permit a unifying treatment of several sample size problems of practical interest and an example shows how they can be used in a managerial situation. A computer program for a personal computer handles all computational complexities and is available upon request.
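A small numeric version of one such criterion, for the binomial parameter with a Beta prior, is sketched below: choose the smallest n whose preposterior expected posterior variance falls below a preset bound. The prior values and the bound are illustrative; the paper's exact criteria and closed-form solutions are not reproduced here.

```python
import numpy as np
from scipy.stats import betabinom

def expected_posterior_variance(n, a, b):
    """Preposterior E[Var(theta | X)] for X ~ Binomial(n, theta), theta ~ Beta(a, b):
    average the Beta(a + x, b + n - x) posterior variance over the beta-binomial predictive."""
    x = np.arange(n + 1)
    post_var = (a + x) * (b + n - x) / ((a + b + n) ** 2 * (a + b + n + 1))
    return np.sum(betabinom.pmf(x, n, a, b) * post_var)

def smallest_sample_size(a, b, bound, n_max=10_000):
    for n in range(n_max + 1):
        if expected_posterior_variance(n, a, b) <= bound:
            return n
    raise ValueError("bound not reached within n_max")

print(smallest_sample_size(a=2.0, b=3.0, bound=0.001))
```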