
Showing papers on "Bayes' theorem" published in 1980


01 Jan 1980
TL;DR: In this article, scientific learning is viewed as an iterative process of criticism and estimation; predictive checking functions for transformation, serial correlation, and bad values are considered along with their relation to Bayesian options, and for the bad value problem the Bayesian treatment is compared with M estimators.
Abstract: Scientific learning is an iterative process employing Criticism and Estimation. Correspondingly the formulated model factors into two complementary parts - a predictive part allowing model criticism, and a Bayes posterior part allowing estimation. Implications for significance tests, the theory of precise measurement, and for ridge estimates are considered. Predictive checking functions for transformation, serial correlation, bad values, and their relation with Bayesian options are considered. Robustness is seen from a Bayesian viewpoint and examples are given. For the bad value problem a comparison with M estimators is made. (Author)

768 citations


Journal ArticleDOI
TL;DR: In this article, the authors report results of experiments designed to test the psychologists' claim that expected utility theory does not provide a good descriptive model; the deviation from the tested theory is that, in revising beliefs, individuals ignore prior or base-rate information, contrary to Bayes' rule.
Abstract: Results of experiments designed to test the claim of psychologists that expected utility theory does not provide a good descriptive model are reported. The deviation from tested theory is that, in revising beliefs, individuals ignore prior or base-rate information contrary to Bayes rule. Flaws in the evidence in the psychological literature are noted, an experiment avoiding these difficulties is designed and carried out, and the psychologists' predictions are stated in terms of a more general model. The psychologists' predictions are confirmed for inexperienced or financially unmotivated subjects, but for others the evidence is less clear.

703 citations
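A small worked illustration (the numbers are invented, not taken from the paper) of the base-rate reasoning these experiments probe: Bayes' rule weighs the base rate against the diagnostic evidence, whereas "base-rate neglect" amounts to judging from the likelihoods alone.

```python
# Hypothetical base-rate problem; numbers are made up for illustration only.
prior_A = 0.30          # base rate of population A
prior_B = 0.70          # base rate of population B
lik_A = 0.80            # P(observed signal | A)
lik_B = 0.40            # P(observed signal | B)

# Bayes' rule: posterior is proportional to prior * likelihood
post_A = prior_A * lik_A / (prior_A * lik_A + prior_B * lik_B)
print(f"P(A | signal) = {post_A:.3f}")   # ~0.462; ignoring the base rate would give 0.667
```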


Journal ArticleDOI
01 Jul 1980
TL;DR: Predictive checking functions for transformation, serial correlation, bad values, and their relation with Bayesian options are considered, and robustness is seen from a Bayesian viewpoint and examples are given.
Abstract: Scientific learning is an iterative process employing Criticism and Estimation. Correspondingly the formulated model factors into two complementary parts - a predictive part allowing model criticism, and a Bayes posterior part allowing estimation. Implications for significance tests, the theory of precise measurement, and for ridge estimates are considered. Predictive checking functions for transformation, serial correlation, bad values, and their relation with Bayesian options are considered. Robustness is seen from a Bayesian viewpoint and examples are given. For the bad value problem a comparison with M estimators is made. (Author)

665 citations


01 Jan 1980
TL;DR: Numerical examples, including seasonal adjustment of time series, are given to illustrate the practical utility of the common-sense approach to Bayesian statistics proposed in this paper.
Abstract: In this paper the likelihood function is considered to be the primary source of the objectivity of a Bayesian method. The necessity of using the expected behavior of the likelihood function for the choice of the prior distribution is emphasized. Numerical examples, including seasonal adjustment of time series, are given to illustrate the practical utility of the common-sense approach to Bayesian statistics proposed in this paper.

259 citations


Journal ArticleDOI
TL;DR: Four classification algorithms (discriminant functions) for classifying individuals into two multivariate populations are compared, and it is shown that the expected classification error EPN depends on the structure of the classification algorithm, the asymptotic probability of misclassification P∞, and the ratio of learning sample size N to dimensionality p, N/p.
Abstract: This paper compares four classification algorithms - discriminant functions (DF's) - for classifying individuals into two multivariate populations. The DF's compared are derived according to the Bayes rule for normal populations and differ in their assumptions on the covariance matrices' structure. Analytical formulas for the expected probability of misclassification EPN are derived and show that the classification error EPN depends on the structure of the classification algorithm, the asymptotic probability of misclassification P∞, and the ratio of learning sample size N to dimensionality p: N/p for all linear DF's discussed and N²/p for quadratic DF's. Tables for the learning quantity H = EPN/P∞, depending on the parameters P∞, N, and p, are presented for the four classification algorithms analyzed and may be used for estimating the necessary learning sample size, determining the optimal number of features, and choosing the type of classification algorithm when the learning sample size is limited.

153 citations
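A minimal simulation sketch (synthetic data; the paper's analytical error formulas are not reproduced here) of two of the compared rules: a linear DF built from a pooled covariance estimate and a quadratic DF with separate covariance estimates, both derived from the Bayes rule for normal populations with equal priors.

```python
# Sketch only: compare a linear and a quadratic discriminant function trained on a
# limited learning sample, and estimate their misclassification rates by simulation.
import numpy as np

rng = np.random.default_rng(0)
p, N = 5, 40                                   # dimensionality and per-class training size
mu1, mu2 = np.zeros(p), np.full(p, 0.7)

def train_and_test(n_test=2000):
    X1 = rng.normal(mu1, 1.0, (N, p))
    X2 = rng.normal(mu2, 1.0, (N, p))
    m1, m2 = X1.mean(0), X2.mean(0)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    Sp = 0.5 * (S1 + S2)                       # pooled covariance for the linear DF

    Xt = np.vstack([rng.normal(mu1, 1.0, (n_test, p)),
                    rng.normal(mu2, 1.0, (n_test, p))])
    y = np.repeat([0, 1], n_test)

    def score(X, m, S):                        # log normal density up to an additive constant
        d = X - m
        Si = np.linalg.inv(S)
        return -0.5 * np.einsum('ij,jk,ik->i', d, Si, d) - 0.5 * np.linalg.slogdet(S)[1]

    lin = score(Xt, m2, Sp) - score(Xt, m1, Sp)   # common covariance: quadratic terms cancel
    qda = score(Xt, m2, S2) - score(Xt, m1, S1)   # separate covariance estimates
    return np.mean((lin > 0) != y), np.mean((qda > 0) != y)

err_lin, err_qda = train_and_test()
print(f"N/p = {N/p:.0f}: linear DF error {err_lin:.3f}, quadratic DF error {err_qda:.3f}")
```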


Journal ArticleDOI
TL;DR: In this paper, a multivariate version of the Hoerl-Kennard ridge regression rule is introduced; the choice from among a large class of possible generalizations is guided by Bayesian considerations, and the result is implicit in the work of Lindley and Smith although not actually derived there.
Abstract: A multivariate version of the Hoerl-Kennard ridge regression rule is introduced. The choice from among a large class of possible generalizations is guided by Bayesian considerations; the result is implicit in the work of Lindley and Smith although not actually derived there. The proposed rule, in a variety of equivalent forms, is discussed, and the choice of its ridge matrix is considered. Adaptive multivariate ridge rules and closely related empirical Bayes procedures are also presented, these being for the most part formal extensions of certain univariate rules. Included is the Efron-Morris multivariate version of the James-Stein estimator. By means of an appropriate generalization of a result of Morris (see Thisted), the mean square errors of these adaptive and empirical Bayes rules are compared.

112 citations
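A brief sketch, under assumed notation, of a ridge rule of the Hoerl-Kennard form B(K) = (X'X + K)⁻¹X'Y applied to a multivariate response; the ridge matrix is taken here to be the simple choice kI rather than the general matrices the paper studies.

```python
# Sketch of a multivariate ridge estimate with a scalar ridge matrix k*I (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 50, 8, 3
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, q))
Y = X @ B_true + rng.normal(scale=0.5, size=(n, q))

def ridge(X, Y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ Y)

for k in (0.0, 1.0, 10.0):
    B = ridge(X, Y, k)
    print(f"k = {k:4.1f}, squared error of coefficients = {np.sum((B - B_true)**2):.3f}")
```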


Journal ArticleDOI
TL;DR: In this article, the structural parameters occurring in a credibility formula are replaced by consistent estimators based on data from a collective of similar risks, which is a credibility counterpart of empirical Bayes estimators.
Abstract: A credibility estimator is Bayes in the restricted class of linear estimators and may be viewed as a linear approximation to the (unrestricted) Bayes estimator. When the structural parameters occurring in a credibility formula are replaced by consistent estimators based on data from a collective of similar risks, we obtain an empirical credibility estimator, which is a credibility counterpart of empirical Bayes estimators. Empirical credibility estimators are proposed under various model assumptions, and sufficient conditions for asymptotic optimality are established.

82 citations
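A minimal sketch of an empirical credibility estimator in a balanced Bühlmann-type setting (an illustrative model, not the paper's): the structural parameters in the credibility factor are replaced by estimates computed from the collective of similar risks.

```python
# Empirical credibility sketch: Z * (individual mean) + (1 - Z) * (collective mean),
# with the structural parameters estimated from the collective. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
K, n = 20, 5                                  # number of risks, observations per risk
theta = rng.normal(100.0, 15.0, K)            # latent risk means
X = rng.normal(theta[:, None], 30.0, (K, n))  # observations: K risks x n periods

risk_means = X.mean(axis=1)
grand_mean = X.mean()
s2 = X.var(axis=1, ddof=1).mean()                   # within-risk variance estimate
tau2 = max(risk_means.var(ddof=1) - s2 / n, 0.0)    # between-risk variance estimate

Z = n / (n + s2 / tau2) if tau2 > 0 else 0.0        # empirical credibility factor
credibility_est = Z * risk_means + (1 - Z) * grand_mean
print(f"Z = {Z:.3f}")
print("first three risks:", np.round(credibility_est[:3], 1))
```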




Journal ArticleDOI
TL;DR: In this paper, the authors use Bayes' theorem to derive plant-specific distributions for component failure rates, taking existing failure-rate distributions as priors and modifying them with plant-specific data.

62 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of mastery decisions and optimal cutoff scores on criterion-referenced tests is considered; the problem can be formalized as an (empirical) Bayes problem with decision rules of a monotone shape.
Abstract: The problem of mastery decisions and optimizing cutoff scores on criterion-referenced tests is considered. This problem can be formalized as an (empirical) Bayes problem with decision rules of a monotone shape. Next, the derivation of optimal cutoff scores for threshold, linear, and normal ogive loss functions is addressed, alternately using such psychometric models as the classical model, the beta-binomial model, and the bivariate normal model. One important distinction made is between decisions with an internal and an external criterion. A natural solution to the problem of reliability and validity analysis of mastery decisions is an analysis based on a standardization of the Bayes risk (coefficient delta). It is indicated how this analysis proceeds and how, in a number of cases, it leads to coefficients already known from classical test theory. Finally, some new lines of research are suggested, along with other aspects of criterion-referenced testing that can be approached from a decision-theoretic point of view.
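A small sketch (hypothetical prior, losses, and test length) of one case the paper treats: a mastery cutoff under threshold loss with a beta-binomial model. Because the posterior probability of mastery is monotone in the observed score, the Bayes rule reduces to a cutoff score.

```python
# Sketch: theta ~ Beta(a, b), score x ~ Binomial(n, theta), mastery means theta >= theta0.
# Advance an examinee when the expected loss of advancing is smaller than that of retaining.
from scipy.stats import beta

a, b = 8.0, 4.0          # prior on the true proportion-correct theta (hypothetical)
n, theta0 = 20, 0.70     # test length and mastery criterion
L_fp, L_fn = 2.0, 1.0    # losses: advancing a nonmaster vs. retaining a master

for x in range(n + 1):
    p_master = beta.sf(theta0, a + x, b + n - x)     # P(theta >= theta0 | x)
    if L_fp * (1 - p_master) < L_fn * p_master:      # expected loss of advancing is smaller
        print(f"optimal cutoff score: {x} (P(master | x) = {p_master:.3f})")
        break
```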

Journal ArticleDOI
TL;DR: Monte Carlo results presented here further confirm the relatively good performance of nonparametric Bayes-theorem-type algorithms compared to parametric (linear and quadratic) algorithms, and point out certain procedures which should be used in selecting the density-estimation windows for nonparametric algorithms to improve their performance.

Journal ArticleDOI
TL;DR: In this article, some simple cases of optimal Bayes designs are investigated for linear models with prior information represented in hierarchical form.
Abstract: Some simple cases of optimal Bayes designs are investigated for linear models with prior information represented in hierarchical form.

DOI
01 Sep 1980
TL;DR: In this paper, a model for assigning the prior probability of an image in Bayes' theorem is proposed, which leads to a very general algorithm for image enhancement, and examples of sharpening blurred photographs show how the success of deconvolution depends on the signal/noise ratio in the degraded images.
Abstract: A model is suggested for assigning the prior probability of an image in Bayes' theorem which leads to a very general algorithm for image enhancement. Examples of sharpening blurred photographs show how the success of deconvolution depends on the signal/noise ratio in the degraded images.
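The paper's own prior model is not reproduced here; the sketch below uses a generic Wiener-style regularized deconvolution on a one-dimensional signal only to illustrate the stated point that the success of deconvolution depends on the signal/noise ratio of the degraded data.

```python
# Not the paper's algorithm: generic regularized (Wiener-style) deconvolution on a toy
# 1-D "scene", showing that reconstruction quality degrades as the noise level rises.
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.zeros(n); x[60] = 1.0; x[130:140] = 0.5              # simple scene
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)   # Gaussian blur kernel
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))

for noise_sd in (0.001, 0.05):
    y = blurred + rng.normal(0.0, noise_sd, n)
    nsr = noise_sd ** 2 / np.var(x)                          # crude noise-to-signal ratio
    wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)             # regularized inverse filter
    x_hat = np.real(np.fft.ifft(np.fft.fft(y) * wiener))
    err = np.sqrt(np.mean((x_hat - x) ** 2))
    print(f"noise sd {noise_sd:.3f}: RMS reconstruction error {err:.3f}")
```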

01 May 1980
TL;DR: Several topics that arise in applying Bayesian ideas to inference problems are discussed: the relationship between the probability specification and real-world experiences is explored, and a suggestion is made that zero probabilities are, in a sense, unreasonable.
Abstract: This paper discusses several topics that arise in applying Bayesian ideas to inference problems. The Bayesian paradigm is first described as an appreciation of the world through probability: probability being expressed in terms of gambles. Various justifications for this view are outlined. The role of models in the specification of probabilities is considered, together with related problems of the size and complexity of the model, robustness and goodness of fit. Some attempt is made to clarify the concept of conditioning in probability statements. The role of the second argument in a probability function is emphasized again in discussion of the likelihood principle. The relationship between the probability specification and real-world experiences is explored and a suggestion is made that zero probabilities are, in a sense, unreasonable. It is pointed out that it is unrealistic to think of probability as necessarily being defined over a sigma-field. The paper concludes with some remarks on two common objections to the Bayesian view. (Author)

Journal ArticleDOI
TL;DR: It is shown that the dilemma of estimating the parameter E(θ|x = a) for some given a = 0, 1, ... can be more or less resolved for large samples by combining the two methods of estimation.
Abstract: Let x be a random variable such that, given θ, x is Poisson with mean θ, while θ has an unknown prior distribution G. In many statistical problems one wants to estimate as accurately as possible the parameter E(θ|x = a) for some given a = 0,1,.... If one assumes that G is a Gamma prior with unknown parameters α and β, then the problem is straightforward, but the estimate may not be consistent if G is not Gamma. On the other hand, a more general empirical Bayes estimator will always be consistent but will be inefficient if in fact G is Gamma. It is shown that this dilemma can be more or less resolved for large samples by combining the two methods of estimation.
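A simulation sketch (not the paper's combined procedure) contrasting the two routes discussed: the parametric estimate of E(θ|x = a) under a fitted Gamma prior, and Robbins' nonparametric empirical Bayes estimate (a + 1)·f(a + 1)/f(a), where f is the marginal frequency of counts.

```python
# Sketch comparing parametric and nonparametric empirical Bayes estimates of E(theta | x = a).
import numpy as np

rng = np.random.default_rng(4)
n, a = 5000, 3
theta = rng.gamma(shape=2.0, scale=1.5, size=n)    # here G really is Gamma
x = rng.poisson(theta)

# Parametric route: method-of-moments fit of a Gamma(alpha, rate beta) prior
m, v = x.mean(), x.var()
beta_hat = m / (v - m)                             # marginal variance = m + m/beta
alpha_hat = m * beta_hat
parametric = (a + alpha_hat) / (1.0 + beta_hat)    # posterior mean under the fitted prior

# Nonparametric route: Robbins' estimator from the empirical marginal frequencies
f_a = np.mean(x == a)
f_a1 = np.mean(x == a + 1)
robbins = (a + 1) * f_a1 / f_a

true_value = np.mean(theta[x == a])                # Monte Carlo check of E(theta | x = a)
print(f"parametric {parametric:.3f}, Robbins {robbins:.3f}, simulated truth {true_value:.3f}")
```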

Journal ArticleDOI
TL;DR: In this article, it was shown that a parametric Bayes model can be approximated by a nonparametric model of the form of a mixture of Dirichlet processes prior, so that the non-parametric prior assigns most of its weight to neighborhoods of the parametric model.
Abstract: Let $\tau$ be a prior distribution over the parameter space $\Theta$ for a given parametric model $P_\theta, \theta \in \Theta$. For the sample space $\mathscr{X}$ (over which the $P_\theta$'s are probability measures) belonging to a general class of topological spaces, which includes the usual Euclidean spaces, it is shown that this parametric Bayes model can be approximated by a nonparametric Bayes model of the form of a mixture of Dirichlet processes prior, so that (i) the nonparametric prior assigns most of its weight to neighborhoods of the parametric model, and (ii) the Bayes rule for the nonparametric model is close to the Bayes rule for the parametric model in the no-sample case. Moreover, any prior, parametric or nonparametric, may be approximated arbitrarily closely by a prior which is a mixture of Dirichlet processes. These results have implications in Bayesian inference.

Journal ArticleDOI
TL;DR: The asymptotic behavior of a Bayes optimal adaptive estimation scheme for a linear discrete-time dynamical system with unknown Markovian noise statistics is investigated.
Abstract: The asymptotic behavior of a Bayes optimal adaptive estimation scheme for a linear discrete-time dynamical system with unknown Markovian noise statistics is investigated. Noise influencing the state equation and the measurement equation is assumed to come from a group of Gaussian distributions having different means and covariances, with transitions from one noise source to another determined by a Markov transition matrix. The transition probability matrix is unknown and can take values only from a finite set. An example is simulated to illustrate the convergence.


Journal ArticleDOI
TL;DR: In this paper, Bayes confidence limits on reliability and availability are obtained for a general class of multi-component systems, including series-parallel and standby systems as special cases, where some or all of the component reliabilities and availabilities are statistical estimates from test and other data.
Abstract: The problem of computing reliability and availability and their associated confidence limits for multi-component systems has appeared often in the literature. This problem arises where some or all of the component reliabilities and availabilities are statistical estimates (random variables) from test and other data. The problem of computing confidence limits has generally been considered difficult and treated only on a case-by-case basis. This paper deals with Bayes confidence limits on reliability and availability for a more general class of systems than previously considered including, as special cases, series-parallel and standby systems applications. The posterior distributions obtained are exact in theory and their numerical evaluation is limited only by computing resources, data representation and round-off in calculations. This paper collects and generalizes previous results of the authors and others. The methods presented in this paper apply both to reliability and availability analysis. The conceptual development requires only that system reliability or availability be probabilities defined in terms acceptable for a particular application. The emphasis is on Bayes Analysis and the determination of the posterior distribution functions. Having these, the calculation of point estimates and confidence limits is routine. This paper includes several examples of estimating system reliability and confidence limits based on observed component test data. Also included is an example of the numerical procedure for computing Bayes confidence limits for the reliability of a system consisting of N failure independent components connected in series. Both an exact and a new approximate numerical procedure for computing point and interval estimates of reliability are presented. A comparison is made of the results obtained from the two procedures. It is shown that the approximation is entirely sufficient for most reliability engineering analysis.
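A Monte Carlo sketch (made-up test data, uniform Beta(1, 1) priors) of the flavor of calculation the paper treats exactly: for a series system of failure-independent components with pass/fail test data, each component reliability has a Beta posterior, and the system posterior is sampled as the product of component reliabilities.

```python
# Sketch: approximate Bayes point and interval estimates for series-system reliability.
import numpy as np

rng = np.random.default_rng(5)
tests = [(48, 50), (29, 30), (95, 100)]        # (successes, trials) per component, invented
samples = np.ones(100_000)
for s, n in tests:
    # Beta(1, 1) prior + binomial data -> Beta(s + 1, n - s + 1) posterior for R_i
    samples *= rng.beta(s + 1, n - s + 1, samples.shape[0])

lo, med, hi = np.percentile(samples, [5, 50, 95])
print(f"posterior median {med:.3f}, 90% Bayes interval ({lo:.3f}, {hi:.3f})")
```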

Journal ArticleDOI
TL;DR: Results for more highly structured problems, involving specific covariance matrices, show that in some cases increasing correlation between the measurements yields higher values of Pcr, and approximate expressions are derived relating Pcr, dimensionality, training sample size, and the structure of the underlying probability density.

ReportDOI
01 Mar 1980
TL;DR: The horizontal distance Δ(x) = G^{-1}(F(x)) - x has been shown by Doksum (1974) to be a useful measure of difference, at each x, between the populations defined by continuous distribution functions F(x) and G(x).
Abstract: The horizontal distance Δ(x) = G^{-1}(F(x)) - x has been shown by Doksum (1974) to be a useful measure of difference, at each x, between the populations defined by continuous distribution functions F(x) and G(x).
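A small sketch (simulated samples, illustrative notation) of the empirical plug-in version of this horizontal distance: Δ(x) = G^{-1}(F(x)) - x estimated from the empirical distribution of one sample and the empirical quantile function of the other.

```python
# Empirical shift-function sketch: for a pure location shift, Delta(x) is roughly constant.
import numpy as np

rng = np.random.default_rng(6)
x_sample = rng.normal(0.0, 1.0, 300)           # population with cdf F
y_sample = rng.normal(0.5, 1.0, 300)           # population with cdf G (shifted by 0.5)

def delta_hat(x, xs, ys):
    p = np.mean(xs <= x)                       # empirical F(x)
    return np.quantile(ys, p) - x              # empirical G^{-1}(F(x)) - x

for x in (-1.0, 0.0, 1.0):
    print(f"Delta({x:+.1f}) ~= {delta_hat(x, x_sample, y_sample):+.3f}")   # about +0.5 everywhere
```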

Journal ArticleDOI
TL;DR: This paper investigates the effect of intraclass correlation among training samples upon the misclassification probabilities of Bayes' procedure whenever the training samples are assumed to be serially correlated.

ReportDOI
01 Nov 1980
TL;DR: Bayes' method has been used to develop a computer code which can be utilized to analyze neutron cross-section data by means of the R-matrix theory.
Abstract: A method is described for determining the parameters of a model from experimental data based upon the utilization of Bayes' theorem. This method has several advantages over the least-squares method as it is commonly used; one important advantage is that the assumptions under which the parameter values have been determined are more clearly evident than in many results based upon least squares. Bayes' method has been used to develop a computer code which can be utilized to analyze neutron cross-section data by means of the R-matrix theory. The required formulae from the R-matrix theory are presented, and the computer implementation of both Bayes' equations and R-matrix theory is described. Details about the computer code and complete input/output information are given.
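The R-matrix machinery is beyond a short sketch; shown below instead is the generic linear Bayes parameter update that such a fit iterates (illustrative notation and made-up values, not the code described above). Unlike unweighted least squares, the prior covariance makes explicit the assumptions under which the parameters are determined.

```python
# Sketch of a linear Bayes update: prior p ~ N(p0, M), data y = G p + e with e ~ N(0, V).
# Posterior covariance M' = (M^{-1} + G' V^{-1} G)^{-1}, posterior mean
# p' = p0 + M' G' V^{-1} (y - G p0).
import numpy as np

p0 = np.array([1.0, 0.5])                             # prior parameter estimates (hypothetical)
M = np.diag([0.5**2, 0.2**2])                         # prior covariance
G = np.array([[1.0, 2.0], [0.5, 1.5], [2.0, 0.3]])    # linearized sensitivity matrix
V = np.diag([0.1**2, 0.1**2, 0.2**2])                 # measurement covariance
y = np.array([2.3, 1.4, 2.1])                         # measured values

Vi = np.linalg.inv(V)
M_post = np.linalg.inv(np.linalg.inv(M) + G.T @ Vi @ G)
p_post = p0 + M_post @ G.T @ Vi @ (y - G @ p0)
print("posterior parameters:", np.round(p_post, 3))
print("posterior std devs:  ", np.round(np.sqrt(np.diag(M_post)), 3))
```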

Journal ArticleDOI
30 Aug 1980-BMJ
TL;DR: The most useful statistical procedure for assessing diagnostic tests is based on a theorem named after the Rev Thomas Bayes, which has much the same status as any other theorem, such as that of Pythagoras.
Abstract: The most useful statistical procedure for assessing diagnostic tests is based on a theorem named after the Rev Thomas Bayes, who discovered it in about 1760. Its use in the more general field of testing statistical hypotheses has caused a good deal of controversy, and it suffered for many years from the influential criticisms of Sir Ronald Fisher. In the applications to be discussed here the use of the theorem is not controversial and has much the same status as any other theorem, such as that of Pythagoras. The ideas incorporated in it are those familiar to all clinicians making a diagnosis, where the likelihood of disease being present depends not only on the signs and symptoms but also on the frequency of the disease in the community. The latter probability is called the a priori or prior probability, and is familiar to every medical student through the maxim that "common things commonly occur."
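A worked illustration with invented numbers of the diagnostic use described: the post-test probability of disease depends not only on the test's sensitivity and specificity but also on the prior probability, the prevalence of the disease in the community.

```python
# Bayes' theorem for a diagnostic test; the numbers are hypothetical.
prevalence = 0.01      # prior probability of disease
sensitivity = 0.95     # P(test positive | disease)
specificity = 0.90     # P(test negative | no disease)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_pos
print(f"P(disease | positive test) = {ppv:.3f}")   # ~0.088 despite an apparently accurate test
```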

Journal ArticleDOI
TL;DR: In this article, for a life test without replacement on M machines, assuming an exponential distribution for failure times, the Bayes sequential procedure for estimating the failure rate θ is studied, and its asymptotic Bayesian and sampling theory properties are obtained as c1, c → 0 and M → ∞ jointly.
Abstract: For a life test without replacement on M machines, assuming an exponential distribution for failure times, the Bayes sequential procedure for estimating the failure rate θ is studied. Estimation error is assumed to be measured by one of a family of loss functions, and the cost of sampling consists of a cost per machine failure c1 ≥ 0 and a cost per unit time c > 0. Assuming a conjugate prior on θ, the Bayes sequential procedure and its asymptotic Bayesian and sampling theory properties are obtained as c1, c → 0 and M → ∞ jointly.

04 Feb 1980
Abstract: A model for sequential clinical trials is discussed. Three proposed stopping rules are studied by Monte Carlo for small patient horizons and mathematically for large patient horizons. They are shown to be about equally effective and asymptotically optimal from both Bayesian and frequentist points of view. Their advantage over any fixed sample size rule is emphasized. (Author)

Book ChapterDOI
01 Jan 1980
TL;DR: In this paper, the authors discuss the asymptotic theory of Bayes solutions in estimation and testing when the observations are from a discrete parameter stochastic process and present sufficient conditions for the strong consistency of the Bayes estimators for general loss functions and discuss some recent results for loss functions of quadratic nature.
Abstract: This chapter discusses the asymptotic theory of Bayes solutions in estimation and testing when the observations are from a discrete parameter stochastic process. It presents the fundamental theorem in the asymptotic theory of Bayesian inference, namely, the approach of the posterior density to the normal for discrete parameter stochastic processes. The chapter explains the asymptotic behavior of Bayes estimators for discrete-time stochastic processes. It presents sufficient conditions for the strong consistency of Bayes estimators for general loss functions and discusses some recent results for loss functions of quadratic nature for a sequence of statistical problems that have the Bayesian form. The chapter discusses Bayesian testing and a limit theorem for the n th root of posterior risk. It reviews the behavior of the Bayes posterior risk in testing disjoint hypotheses or hypotheses that are separated by an indifference region in which the losses caused by taking the wrong decision are zero.

Journal ArticleDOI
TL;DR: In this paper, the authors provide explicit solutions to the problem of estimating the arrival rate of a Poisson process using a Bayes sequential approach, where the cost of observation includes both a time cost and an event cost.
Abstract: This paper provides explicit solutions to the problem of estimating the arrival rate $\lambda$ of a Poisson process using a Bayes sequential approach. The loss associated with estimating $\lambda$ by $d$ is assumed to be of the form $(\lambda - d)^2\lambda^{-p}$ and the cost of observation includes both a time cost and an event cost. A discrete time approach is taken in which decisions are made at the end of time intervals having length $t$. Limits of the procedures as $t$ approaches zero are discussed and related to the continuous time Bayes sequential procedure.

Journal ArticleDOI
TL;DR: In this paper, the authors presented a slightly different estimator which is simpler and is also asymptotically optimal with the same rate of convergence, and showed that the estimator is a proper distribution function.
Abstract: Susarla and Van Ryzin exhibited an empirical Bayes estimator of a distribution function $F$ based on randomly right-censored observations. In a later paper they obtained a different estimator which alleviates the weaknesses of their earlier estimator and showed that it is asymptotically optimal with rate of convergence $n^{-1}$. The purpose of this note is to present a slightly different estimator which is simpler and is also asymptotically optimal with the same rate of convergence. Their numerical example is reworked to show that the estimator is a proper distribution function.