
Showing papers on "Bayes' theorem published in 1981"


Journal ArticleDOI
TL;DR: In this paper, the authors illustrate Bayesian and empirical Bayesian techniques that can be used to summarize the evidence in such data about differences among treatments, thereby obtaining improved estimates of the treatment effect in each experiment, including the one having the largest observed effect.
Abstract: Many studies comparing new treatments to standard treatments consist of parallel randomized experiments. In the example considered here, randomized experiments were conducted in eight schools to determine the effectiveness of special coaching programs for the SAT. The purpose here is to illustrate Bayesian and empirical Bayesian techniques that can be used to help summarize the evidence in such data about differences among treatments, thereby obtaining improved estimates of the treatment effect in each experiment, including the one having the largest observed effect. Three main tools are illustrated: 1) graphical techniques for displaying sensitivity within an empirical Bayes framework, 2) simple simulation techniques for generating Bayesian posterior distributions of individual effects and the largest effect, and 3) methods for monitoring the adequacy of the Bayesian model specification by simulating the posterior predictive distribution in hypothetical replications of the same treatments in the same eight schools.

263 citations
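
A minimal sketch of the empirical Bayes shrinkage the abstract describes, assuming the usual normal hierarchical model y_j ~ N(theta_j, se_j^2), theta_j ~ N(mu, tau^2); the numbers are the eight-schools figures as commonly reproduced in later textbook accounts and should be treated as illustrative:

```python
# Empirical Bayes shrinkage for parallel experiments: estimate (mu, tau) by
# marginal maximum likelihood, then shrink each raw effect toward the mean.
import numpy as np

# Observed treatment effects and their standard errors, one per experiment
# (illustrative values patterned on the eight-schools study).
y  = np.array([28., 8., -3., 7., -1., 1., 18., 12.])
se = np.array([15., 10., 16., 11., 9., 11., 10., 18.])

def profile_loglik(tau):
    """Marginal log-likelihood of tau with mu profiled out."""
    v = se**2 + tau**2
    mu = np.sum(y / v) / np.sum(1.0 / v)
    return -0.5 * np.sum(np.log(v) + (y - mu)**2 / v), mu

# Grid search over the between-experiment standard deviation tau.
taus = np.linspace(0.0, 30.0, 3001)
lls, mus = zip(*(profile_loglik(t) for t in taus))
best = int(np.argmax(lls))
tau_hat, mu_hat = taus[best], mus[best]

# Shrink each raw effect toward the common mean; the shrinkage weight
# grows as the experiment's standard error grows.
b = se**2 / (se**2 + tau_hat**2)
theta_hat = (1 - b) * y + b * mu_hat
print("tau_hat =", round(tau_hat, 2), "mu_hat =", round(mu_hat, 2))
print("shrunken effects:", np.round(theta_hat, 2))
```

The Bayesian analysis in the paper goes further, simulating full posterior distributions of the individual effects and of the largest effect rather than stopping at point estimates.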


Journal ArticleDOI
TL;DR: The paper shows that on this Bayesian basis it is possible to build a consistent theory of system identification and considers problems of one-shot and real-time identification, estimation and prediction in closed control loop, redundant and unidentifiable parameters, time-varying parameters and adaptivity.

234 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic process is defined whose sample paths may be assumed to be either increasing hazard rates or decreasing hazard rates by properly choosing the parameter functions of the process.
Abstract: It is suggested that problems in a reliability context may be handled by a Bayesian non-parametric approach. A stochastic process is defined whose sample paths may be assumed to be either increasing hazard rates or decreasing hazard rates by properly choosing the parameter functions of the process. The posterior distributions of the hazard rates are derived for both exact and censored data. Bayes estimates of hazard rates, c.d.f.'s, densities, and means are found under squared error type loss functions. Some simulation is done and the estimates are graphed to better understand the estimators. Finally, estimates of the c.d.f. from some data in a paper by Kaplan and Meier are constructed.

227 citations
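
A loose conjugate sketch of Bayesian hazard estimation from exact and censored data; this is not the paper's stochastic-process prior, just the simplest analogue with a piecewise-constant hazard and an independent Gamma prior on each interval's rate:

```python
# NOT the paper's process prior: a piecewise-constant hazard with an
# independent Gamma(a, b) prior (rate parametrization) on each interval,
# updated by the deaths and total exposure observed in that interval.
import numpy as np

# Lifetimes and censoring indicators (1 = observed death), illustrative.
t = np.array([0.5, 1.2, 2.7, 3.1, 0.9, 4.5, 2.2, 1.8])
d = np.array([1,   1,   0,   1,   1,   0,   1,   1  ])

edges = np.arange(0.0, 6.0)          # interval boundaries [0,1), [1,2), ...
a, b = 0.5, 1.0                      # Gamma prior on each interval's hazard

post_mean = []
for lo, hi in zip(edges[:-1], edges[1:]):
    exposure = np.clip(t - lo, 0.0, hi - lo).sum()    # time at risk here
    deaths = np.sum((d == 1) & (t >= lo) & (t < hi))  # events in interval
    post_mean.append((a + deaths) / (b + exposure))   # Gamma posterior mean

print("posterior mean hazard per interval:", np.round(post_mean, 3))
```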


Journal ArticleDOI
TL;DR: It is shown that empirical Bayes procedures are really non-Bayesian, asymptotically optimal, classical procedures for mixtures.
Abstract: A Bayesian approach is given for various kinds of empirical Bayes problems. In particular it is shown that empirical Bayes procedures are really non-Bayesian, asymptotically optimal, classical procedures for mixtures. In some situations these procedures are Bayes with respect to some prior and in other situations, there is no prior for which they are Bayes. Several examples of these concepts are given as well as a general theory showing the difference between an empirical Bayes model and a Bayes empirical Bayes model.

203 citations
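
The abstract's point that empirical Bayes procedures are asymptotically optimal classical procedures for mixtures is easiest to see in Robbins' classic Poisson example, sketched here under an assumed Gamma mixing distribution (unknown to the procedure itself):

```python
# Robbins-style empirical Bayes for Poisson data: theta_i is drawn from an
# unknown prior and x_i | theta_i ~ Poisson(theta_i).
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
theta = rng.gamma(shape=3.0, scale=1.0, size=n)   # unknown to the statistician
x = rng.poisson(theta)

# Robbins' rule: E[theta | x] = (x + 1) * f(x + 1) / f(x), with the marginal
# pmf f estimated by the empirical frequencies of the observed counts.
counts = np.bincount(x, minlength=x.max() + 2)
f = counts / n

def robbins(k):
    return (k + 1) * f[k + 1] / f[k] if f[k] > 0 else float("nan")

for k in range(6):
    # With a Gamma(3, 1) prior the true posterior mean is (k + 3) / 2.
    print(k, round(robbins(k), 2), "true:", (k + 3) / 2)
```

The rule approaches the true posterior mean without ever specifying a prior, which is exactly the non-Bayesian character of empirical Bayes that the paper analyzes.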


Journal ArticleDOI
TL;DR: In this article, it was shown that if the interval is small (approximately two standard deviations wide) then the Bayes rule against a two point prior is the unique minimax estimator under squared error loss.
Abstract: The problem of estimating a normal mean has received much attention in recent years. If one assumes, however, that the true mean lies in a bounded interval, the problem changes drastically. In this paper we show that if the interval is small (approximately two standard deviations wide), then the Bayes rule against a two-point prior is the unique minimax estimator under squared error loss. For somewhat wider intervals we also derive sufficient conditions for minimaxity of the Bayes rule against a three-point prior.

189 citations
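
A sketch of the estimator in question, assuming unit noise variance: against the symmetric two-point prior on the endpoints {-m, +m}, the posterior mean of X ~ N(theta, 1) works out to delta(x) = m*tanh(m*x), and the code checks numerically that its squared-error risk is nearly flat over [-m, m] for a narrow interval:

```python
# Bayes rule against the two-point prior for a bounded normal mean, with a
# numerical evaluation of its squared-error risk over the interval.
import numpy as np

m = 1.0                      # half-width of the parameter interval
xs = np.linspace(-10, 10, 4001)
dx = xs[1] - xs[0]
delta = m * np.tanh(m * xs)  # posterior mean under the two-point prior

def risk(theta):
    """Squared-error risk R(theta, delta) by quadrature over x."""
    phi = np.exp(-0.5 * (xs - theta) ** 2) / np.sqrt(2 * np.pi)
    return np.sum((delta - theta) ** 2 * phi) * dx

thetas = np.linspace(-m, m, 201)
risks = np.array([risk(t) for t in thetas])
print("risk at 0:       ", round(risk(0.0), 4))
print("risk at +/- m:   ", round(risk(m), 4))
print("max over [-m, m]:", round(risks.max(), 4))
```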


Journal ArticleDOI
TL;DR: It is argued here that, if all variables in the model are random, then Bayes' theorem provides the logical link between the data and the unobserved latent variables.
Abstract: The term posterior analysis is used in this paper to refer to methods of drawing inferences about the latent variables in factor analysis after the model has been fitted. In particular, it is concerned with the problem of locating each individual in the latent space on the basis of the values of the observed variables. This problem has been traditionally treated by determining factor scores. It is argued here that, if all variables in the model are random, then Bayes' theorem provides the logical link between the data and the unobserved latent variables. Viewed in this perspective the indeterminacy of factor scores is simply an expression of the fact that the latent variables are still random variables after the manifest variables have been observed. The name, factor scores, can then reasonably be given to the location parameters of the posterior distributions. The paper is primarily expository and it contains no new mathematics. Its concern is with the logical framework within which the analysis should be carried out and interpreted.

83 citations
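
A minimal sketch of this posterior view of factor scores for a one-factor normal model (notation assumed here, not taken from the paper): Bayes' theorem gives the latent variable a normal posterior whose mean is the usual regression factor score and whose variance quantifies the indeterminacy the abstract mentions.

```python
# Posterior of the latent factor in x = lam * f + e, with f ~ N(0, 1) and
# e ~ N(0, diag(psi)); all numbers are illustrative.
import numpy as np

lam = np.array([0.8, 0.7, 0.6, 0.5])      # factor loadings
psi = 1.0 - lam**2                        # unique variances (unit scale)
x = np.array([1.2, 0.4, 0.9, -0.1])       # one observed profile

precision = 1.0 + lam @ (lam / psi)       # prior precision 1 plus data term
post_var = 1.0 / precision
post_mean = post_var * (lam @ (x / psi))

print("posterior mean (factor score):", round(post_mean, 3))
print("posterior sd (indeterminacy): ", round(np.sqrt(post_var), 3))
```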


Journal ArticleDOI
TL;DR: In this paper, confirmatory factor analysis is considered from a Bayesian viewpoint, in which prior information on the parameters is incorporated in the analysis, and an iterative algorithm is developed to obtain the Bayes estimates.
Abstract: Confirmatory factor analysis is considered from a Bayesian viewpoint, in which prior information on the parameters is incorporated in the analysis. An iterative algorithm is developed to obtain the Bayes estimates. A numerical example based on longitudinal data is presented. A simulation study is designed to compare the Bayesian approach with the maximum likelihood method.

79 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the existence of the asymptotically invariant sequence of probabilities in the hypothesis of the Hunt-Stein theorem is equivalent to amenability, a condition that has been much studied by functional analysts.
Abstract: A number of conditions on groups have appeared in the literature of invariant statistical models in connection with minimaxity, approximation of invariant Bayes priors by proper priors, the relationship between Bayesian and classical inference, ergodic theorems, and other matters. In the last decade, rapid development has occurred in the field and many of these conditions are now known to be equivalent. We survey the subject, make the equivalences explicit, and list some groups of statistical interest which do, and also some which do not, have these properties. In particular, it is shown that the existence of the asymptotically invariant sequence of probabilities in the hypothesis of the Hunt-Stein theorem is equivalent to amenability, a condition that has been much studied by functional analysts.

64 citations


Journal ArticleDOI
TL;DR: In this paper, an approach to unsupervised pattern classification is discussed, based on an approximation of the probability densities of each class under the assumption that the input patterns follow a normal mixture.
Abstract: In this paper, an approach to unsupervised pattern classification is discussed. The classification scheme is based on an approximation of the probability densities of each class under the assumption that the input patterns are drawn from a normal mixture. The proposed technique for identifying the mixture does not require prior information. The description of the mixture in terms of convexity makes it possible to determine, from a totally unlabeled set of samples, the number of components and, for each of them, approximate values of the mean vector, the covariance matrix, and the a priori probability. Discriminant functions can then be constructed. Computer simulations show that the procedure yields decision rules whose performances remain close to the optimum Bayes minimum error rate, while involving only a small amount of computation.

60 citations
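
The paper's convexity-based identification of the mixture is not reproduced here; as a sketch of the same end goal, namely recovering component parameters from completely unlabeled normal-mixture data, the following runs plain EM on a univariate two-component example:

```python
# EM for a univariate two-component normal mixture on unlabeled data.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 600), rng.normal(3, 1.5, 400)])

# Initial guesses for (weight, mean, sd) of each component.
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: posterior probability of each component for every sample.
    r = w * normal_pdf(x[:, None], mu, sd)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", np.round(w, 3))
print("means:  ", np.round(mu, 3))
print("sds:    ", np.round(sd, 3))
```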


Journal ArticleDOI
01 Apr 1981
TL;DR: Pattern recognition procedures based on the Cesaro mean of orthogonal series are presented and their Bayes risk consistency is established.
Abstract: Pattern recognition procedures based on the Cesaro mean of orthogonal series are presented and their Bayes risk consistency is established. No restrictions are put on the class conditional densities.

51 citations


Journal ArticleDOI
TL;DR: The optimum fixed interval smoothing problem is solved using a Bayesian approach, assuming that the signal is Markov and is corrupted by independent noise (not necessarily additive) and a recursive algorithm to compute the a posteriori smoothed density is obtained.
Abstract: The optimum fixed interval smoothing problem is solved using a Bayesian approach, assuming that the signal is Markov and is corrupted by independent noise (not necessarily additive). A recursive algorithm to compute the a posteriori smoothed density is obtained. Using this recursive algorithm, the smoothed estimate of a binary Markov signal corrupted by an independent noise in a nonlinear manner is determined, demonstrating that the Bayesian approach presented in this paper is not restricted to the Gauss-Markov problem.
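
A discrete-time sketch of this kind of recursion for the example in the abstract: a binary Markov signal seen through an illustrative nonlinear observation model, smoothed with the standard forward-backward decomposition of the a posteriori density. The transition matrix and noise level are assumptions for the demo.

```python
# Fixed-interval smoothing of a two-state Markov signal via forward-backward.
import numpy as np

rng = np.random.default_rng(2)
T = 200
P = np.array([[0.95, 0.05],       # state transition matrix
              [0.10, 0.90]])
states = np.array([0.0, 1.0])

# Simulate the chain and a nonlinear observation y = exp(s) + noise.
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
y = np.exp(states[s]) + rng.normal(0, 0.8, T)

def lik(yt):                      # p(y_t | state), evaluated for both states
    return np.exp(-0.5 * ((yt - np.exp(states)) / 0.8) ** 2)

# Forward pass (filtering), normalized at each step.
alpha = np.zeros((T, 2))
alpha[0] = 0.5 * lik(y[0]); alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = lik(y[t]) * (alpha[t - 1] @ P)
    alpha[t] /= alpha[t].sum()

# Backward pass, then combine into the smoothed posterior.
beta = np.ones((T, 2))
for t in range(T - 2, -1, -1):
    beta[t] = P @ (lik(y[t + 1]) * beta[t + 1])
    beta[t] /= beta[t].sum()
post = alpha * beta
post /= post.sum(axis=1, keepdims=True)

s_hat = post.argmax(axis=1)
print("smoothed error rate:", np.mean(s_hat != s))
```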

Journal ArticleDOI
TL;DR: In this paper, sequential estimation of the mean of a one-parameter exponential family under squared error loss, with a cost c>0 per observation, is treated in a Bayesian framework with natural conjugate priors, and the asymptotically pointwise optimal (A.P.O.) procedure, which continues sampling until the posterior variance falls below $c(r_0+n)$, is shown to be asymptotically non-deficient.
Abstract: The problem considered is sequential estimation of the mean θ of a one-parameter exponential family of distributions with squared error loss for estimation error and a cost c>0 for each of an i.i.d. sequence of potential observations $X_1, X_2, \ldots$. A Bayesian approach is adopted, and natural conjugate prior distributions are assumed. For this problem, the asymptotically pointwise optimal (A.P.O.) procedure continues sampling until the posterior variance of θ is less than $c(r_0+n)$, where n is the sample size and $r_0$ is the fictitious sample size implicit in the conjugate prior distribution. It is known that the A.P.O. procedure is Bayes risk efficient under mild integrability conditions. In fact, the Bayes risks of both the optimal and A.P.O. procedures are asymptotic to $2V_0\sqrt{c}$ as c→0, where $V_0$ is the prior expectation of the standard deviation of $X_1$ given θ. Here the A.P.O. rule is shown to be asymptotically non-deficient under stronger regularity conditions: that is, the difference between the Bayes risk of the A.P.O. rule and the Bayes risk of the optimal procedure is of smaller order of magnitude than c, the cost of a single observation, as c→0. The result is illustrated in the exponential and Bernoulli cases, and extended to the case of a normal distribution with both the mean and variance unknown.
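
A sketch of the A.P.O. stopping rule in the Bernoulli case with a Beta conjugate prior, following the stopping criterion quoted in the abstract (posterior variance below $c(r_0+n)$); the true parameter and cost below are arbitrary.

```python
# A.P.O. sequential estimation of a Bernoulli parameter under a Beta prior.
import numpy as np

rng = np.random.default_rng(3)
theta_true, c = 0.3, 1e-5
a, b = 1.0, 1.0                  # Beta prior; fictitious sample size r0 = a + b
r0 = a + b

n = 0
while True:
    # Posterior variance of theta under Beta(a, b).
    post_var = a * b / ((a + b) ** 2 * (a + b + 1))
    if post_var < c * (r0 + n):
        break
    xi = rng.random() < theta_true       # next Bernoulli observation
    a, b = a + xi, b + (1 - xi)
    n += 1

print("stopped after n =", n, "observations")
print("posterior mean  =", round(a / (a + b), 4))
```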

Journal ArticleDOI
TL;DR: In this paper, a quasi-Bayes approach is proposed to estimate the failure intensity of a non-homogeneous Poisson process at the time of failure n. The proposed estimate has the qualitative properties one anticipates from the ordinary Bayes estimate, but it is easy to compute.
Abstract: A non-homogeneous Poisson process has empirically been shown to be useful in tracking the reliability growth of a system as it undergoes development. It is of interest to estimate the failure intensity of this model at the time of failure n. The maximum likelihood estimate is known, but it is desirable to have a Bayesian estimate to allow for input of prior information. Since the ordinary Bayes approach appears to be mathematically intractable, a quasi-Bayes approach is taken. The proposed estimate has the qualitative properties one anticipates from the ordinary Bayes estimate, but it is easy to compute. A numerical example illustrates the Bayesian character of the proposed estimate. A simulation study shows that the proposed estimate, when considered in the classical framework, generally has smaller r.m.s. error than the maximum likelihood estimate.
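
The quasi-Bayes estimate is specific to the paper and is not reproduced here; for orientation, this computes the known maximum-likelihood estimate of the intensity at the n-th failure for the power-law ("Crow/AMSAA") NHPP commonly used in reliability growth, which is the estimator the paper's proposal is compared against. Failure times are illustrative.

```python
# MLE of the power-law NHPP intensity rho(t) = lam * beta * t**(beta - 1)
# at the n-th failure, for failure-truncated data.
import numpy as np

# Cumulative failure times of a system under development (illustrative).
t = np.array([4.2, 9.7, 20.1, 33.5, 55.0, 88.3, 140.2, 210.9])
n, tn = len(t), t[-1]

beta_hat = n / np.sum(np.log(tn / t[:-1]))   # shape (growth) parameter
lam_hat = n / tn**beta_hat                   # scale parameter
rho_hat = n * beta_hat / tn                  # intensity at the n-th failure

print("beta_hat =", round(beta_hat, 3), "(beta < 1 indicates growth)")
print("intensity at t_n =", round(rho_hat, 4), "failures per unit time")
```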

Journal ArticleDOI
TL;DR: Application of Bayes' theorem using data collected provides an insight based upon probabilities and odds in the way preoperative conditions and operative results affect the ultimate treatment result.

Journal ArticleDOI
TL;DR: The Bayes population assignment of x and Tx are shown to be equivalent for a compression matrix T explicitly calculated as a function of the means and covariances of the given populations.

Journal ArticleDOI
TL;DR: In this article, the determination of a stopping rule for the detection of the time of an increase in the success probability of a sequence of independent Bernoulli trials is discussed, and the results indicate that the detection procedure is quite effective.

Journal ArticleDOI
TL;DR: This paper relaxes the conventional subjective probability setup by allowing the $\sigma$-algebra on which probabilities are defined to be subjective along with the probability measure, and characterizes the individual's selection of a probability domain as the outcome of a decision process.
Abstract: This paper relaxes the conventional subjective probability setup by allowing the $\sigma$-algebra on which probabilities are defined to be subjective along with the probability measure. First, the role of the probability domain in existing statistical decision theory is examined. Then the existing theory is extended by characterizing the individual's selection of a probability domain as the outcome of a decision process.

Journal ArticleDOI
TL;DR: In this paper, a Bayesian approach to nonlinear regression is implemented and evaluated in the light of practical as well as theoretical considerations, and numerical integration methods and the calculation of confidence regions using the posterior distributions are given.
Abstract: A traditional approach to parameter estimation and hypothesis testing in nonlinear models is based on least squares procedures. Error analysis depends on large-sample theory; Bayesian analysis is an alternative approach that avoids substantial errors which could result from this dependence. This communication is concerned with the implementation of the Bayesian approach as an alternative to least squares nonlinear regression. Special attention is given to the numerical evaluation of multiple integrals and to the behavior of the parameter estimators and their estimated covariances. The Bayesian approach is evaluated in the light of practical as well as theoretical considerations. 1. Introduction The traditional approach to the statistical analysis of nonlinear models is first to use some numerical method to minimize the sum-of-squares objective function in order to obtain least squares estimators of the parameters (Draper and Smith, 1966, pp. 267-275; Nelder and Mead, 1965), and then to apply linear regression theory to the linear part of the Taylor series approximation of the model expanded about these estimators in order to obtain the asymptotic covariance matrix (Bard, 1974, pp. 176-179). The distributions of the estimators obtained in this manner are known only in the limit as the sample size approaches infinity (Jennrich, 1969). Hence, analyses based on these statistics may be inappropriate for small-sample problems, such as those arising in pharmacokinetics (Wagner, 1975). An alternative approach to the statistical analysis of nonlinear models is to utilize methods based on Bayes's theorem (Box and Tiao, 1973, pp. 1-73). Parameters are regarded as random variables rather than as unknown constants. If a nonlinear model with known error distribution is assumed, a correct probability analysis follows and asymptotic theory is not involved. In this communication a Bayesian approach to nonlinear regression is implemented and evaluated. Particular attention is given to numerical integration methods and the calculation of confidence regions using the posterior distributions.
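
A one-parameter sketch of this approach, assuming an exponential-decay model, a flat prior, and known error standard deviation: the posterior is evaluated on a grid, and the posterior mean and a credible interval are obtained by numerical quadrature rather than by asymptotic linearization.

```python
# Grid-based Bayesian inference for a one-parameter nonlinear regression.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.5, 8.0, 12)
k_true, sigma = 0.45, 0.05
y = np.exp(-k_true * t) + rng.normal(0, sigma, t.size)

# Unnormalized posterior on a grid (flat prior => proportional to likelihood).
ks = np.linspace(0.01, 1.5, 2000)
resid = y[None, :] - np.exp(-ks[:, None] * t[None, :])
logpost = -0.5 * np.sum(resid**2, axis=1) / sigma**2
post = np.exp(logpost - logpost.max())
post /= np.trapz(post, ks)                    # normalize by quadrature

k_mean = np.trapz(ks * post, ks)              # posterior mean
cdf = np.cumsum(post) * (ks[1] - ks[0])
lo, hi = ks[np.searchsorted(cdf, 0.025)], ks[np.searchsorted(cdf, 0.975)]
print("posterior mean k =", round(k_mean, 3))
print("95% credible interval: [", round(lo, 3), ",", round(hi, 3), "]")
```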

Journal ArticleDOI
TL;DR: For the p-variate Poisson mean, under the sum of weighted squared error losses (the weights being reciprocals of the variances), a class of proper Bayes minimax estimates dominating the usual estimate, namely the sample mean, is produced, as discussed by the authors.

Journal ArticleDOI
TL;DR: Pattern recognition procedures derived from a nonparametric estimate of multivariate probability density functions using the orthogonal Hermite system are examined to find the convergence rate of the mean integrated square error.
Abstract: Pattern recognition procedures derived from a nonparametric estimate of multivariate probability density functions using the orthogonal Hermite system are examined. For sufficiently regular densities, the convergence rate of the mean integrated square error (MISE) is O(n^{-1+\epsilon}), \epsilon > 0, where n is the number of observations, and is independent of the dimension. As a consequence, the rate at which the probability of misclassification converges to the Bayes probability of error as the length n of the learning sequence tends to infinity is also independent of the dimension of the class densities and equals O(n^{-1/2+\delta}), \delta > 0.
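
A univariate sketch of the underlying density estimate (the paper treats the multivariate case and the resulting plug-in classifier): expand the density in orthonormal Hermite functions, estimate each coefficient by a sample mean, and truncate the series. The sample, truncation point, and test density are illustrative.

```python
# Orthogonal-series density estimation with the Hermite system.
import math
import numpy as np
from scipy.special import eval_hermite

def hermite_fn(k, x):
    """Orthonormal Hermite function h_k(x) on the real line."""
    norm = 1.0 / np.sqrt(2.0**k * math.factorial(k) * np.sqrt(np.pi))
    return norm * eval_hermite(k, x) * np.exp(-x**2 / 2.0)

rng = np.random.default_rng(5)
sample = rng.normal(0.5, 1.0, 2000)          # unknown density to recover
N = 8                                        # truncation point of the series

# Each Fourier coefficient a_k = E[h_k(X)] is estimated by a sample mean.
coef = [hermite_fn(k, sample).mean() for k in range(N + 1)]

xs = np.linspace(-4, 5, 400)
f_hat = sum(c * hermite_fn(k, xs) for k, c in enumerate(coef))

true = np.exp(-0.5 * (xs - 0.5) ** 2) / np.sqrt(2 * np.pi)
print("max abs error vs true density:", round(np.max(np.abs(f_hat - true)), 3))
```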

01 Dec 1981
TL;DR: The authors argued that both averaging and conservatism in the Bayesian task occur because subjects produce their judgments by using an adjustment strategy that is qualitatively equivalent to averaging, and two experiments were presented that support this view by showing qualitative errors in the direction of revisions in Bayesian inference that are well-accounted for by the simple adjustment strategy.
Abstract: Two empirically well-supported research findings in the judgment literature are that (1) human judgments often appear to follow an averaging rule, and (2) judgments in Bayesian inference tasks are usually conservative relative to optimal judgments. This paper argues that both averaging and conservatism in the Bayesian task occur because subjects produce their judgments by using an adjustment strategy that is qualitatively equivalent to averaging. Two experiments are presented that support this view by showing qualitative errors in the direction of revisions in the Bayesian task that are well accounted for by the simple adjustment strategy. Two additional results are also discussed: (1) a tendency for subjects in one experiment to evaluate sample evidence according to representativeness rather than according to relative likelihood, and (2) a strong recency effect that may reflect the influence of the internal representation of sample information during the judgment process.

Journal ArticleDOI
TL;DR: In this article, the reliability function is considered for the inverse Gaussian distribution, and a modified estimator of reliability is proposed which is based on the Bayes estimator obtained for $\mu$ known.
Abstract: Estimation of the reliability function is considered for the inverse Gaussian distribution. When the mean lifetime $\mu$ is known, the Jeffreys vague prior and the natural conjugate prior for $\lambda$ easily yield Bayes estimators of reliability for squared-error loss. If both $\mu$ and $\lambda$ are unknown, the Bayes solution for reliability in a compact form is extremely difficult. In this case a modified estimator of reliability can be used which is based on the Bayes estimator obtained for $\mu$ known. The modified estimator is simpler to calculate than the MVUE. Computer simulations indicate that it is more conservative than either the MVUE or the MLE for small mission times, but performs better than the MLE and MVUE for large times.
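
The modified Bayes estimator is specific to the paper; as the baseline it is compared against, this computes the maximum-likelihood plug-in estimate of the inverse Gaussian reliability function, using scipy's parametrization (mean = mu * scale, shape lambda = scale). The data are simulated for illustration.

```python
# MLE plug-in estimate of R(t) = P(T > t) for inverse Gaussian lifetimes.
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(6)
lifetimes = invgauss.rvs(mu=0.5, scale=100.0, size=40, random_state=rng)
# (true mean = 0.5 * 100 = 50, true shape lambda = 100)

mu_hat = lifetimes.mean()                                      # MLE of the mean
lam_hat = len(lifetimes) / np.sum(1 / lifetimes - 1 / mu_hat)  # MLE of lambda

def reliability(t):
    return invgauss.sf(t, mu=mu_hat / lam_hat, scale=lam_hat)

for t in (25.0, 50.0, 100.0):
    print("R_hat(%5.1f) = %.3f" % (t, reliability(t)))
```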

Journal ArticleDOI
TL;DR: In this article, the asymptotic optimality of the empirical Bayes distribution function created from the Bayes rule relative to the Dirichlet process prior with unknown parameter was established.
Abstract: This paper establishes the asymptotic optimality (in the sense of Robbins) of the empirical Bayes distribution function created from the Bayes rule relative to the Dirichlet process prior with unknown parameter $\alpha(.)$. It will follow that the same result applies to the estimation of the mean of a distribution function.
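
A sketch of the Bayes rule the empirical Bayes procedure is built from: under a Dirichlet process prior with base measure $\alpha = M F_0$, the posterior mean of the distribution function is a weighted average of $F_0$ and the empirical c.d.f. The choices of M and F0 below are illustrative.

```python
# Posterior-mean cdf under a Dirichlet process prior.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
data = rng.normal(1.0, 2.0, 30)
M = 5.0                                   # prior "sample size" alpha(R)
F0 = norm(loc=0.0, scale=1.0).cdf         # prior guess at the cdf

def F_bayes(t):
    t = np.asarray(t, dtype=float)
    emp = (data[None, :] <= t[..., None]).mean(axis=-1)   # empirical cdf
    return (M * F0(t) + len(data) * emp) / (M + len(data))

ts = np.array([-1.0, 0.0, 1.0, 3.0])
print("Bayes cdf estimate:", np.round(F_bayes(ts), 3))
```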


Journal ArticleDOI
TL;DR: The use of composite null and alternative hypotheses in sequential clinical trials is explored, and it is shown that the SPRT and the Bayes formulations using Bayes odds ratios are equivalent in terms of the weighted likelihood ratio.
Abstract: Sequential methods have become increasingly important for the monitoring of patient safety during clinical trials. However, the typical Wald sequential probability ratio test (SPRT), which compares two simple hypotheses, often presents anomalies which can be attributed to an inadequate representation of the parameter space. The use of composite null and alternative hypotheses in sequential clinical trials is explored and the resulting sequential rules are examined. It is shown that the SPRT and the Bayes formulations using Bayes odds ratios are equivalent in terms of the weighted likelihood ratio (WLR). The WLR is obtained for normal variates when the null hypothesis restricts the mean to (i) an interval and (ii) a point, in each case with complementary alternatives, as well as the one-sided formulation with a half-open interval. Applications to clinical trials include large-sample procedures, the comparative binomial trial and the comparison of survival distributions. Illustrative sequential boundaries are presented and the features of these different formulations are compared and discussed. Mixed sequential rules are considered within the framework for ethical stopping rules proposed by Meier (1979, Clinical Pharmacology and Therapeutics 25, 633-640).
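
For reference, a sketch of the plain Wald SPRT between two simple hypotheses about a normal mean, the procedure whose anomalies motivate the paper's composite-hypothesis formulation; the hypotheses and error rates are illustrative.

```python
# Wald SPRT for a normal mean with known variance.
import numpy as np

rng = np.random.default_rng(8)
mu0, mu1, sigma = 0.0, 0.5, 1.0         # simple null and alternative
alpha_err, beta_err = 0.05, 0.10        # target type I / type II errors
logA = np.log((1 - beta_err) / alpha_err)
logB = np.log(beta_err / (1 - alpha_err))

llr, n = 0.0, 0
while logB < llr < logA:
    x = rng.normal(0.3, sigma)          # true mean between the hypotheses
    # Log-likelihood ratio increment for one normal observation.
    llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    n += 1

print("stopped after n =", n)
print("decision:", "accept H1" if llr >= logA else "accept H0")
```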

Journal ArticleDOI
TL;DR: In this article, the problem of selecting good populations out of k normal populations is considered in a Bayesian framework under exchangeable normal priors and additive loss functions, and basic approximations to the Bayes rules are discussed.
Abstract: The problem of selecting good populations out of k normal populations is considered in a Bayesian framework under exchangeable normal priors and additive loss functions. Some basic approximations to the Bayes rules are discussed. These approximations suggest that some well-known classical rules are "approximate" Bayes rules. In particular, it is shown that Gupta-type rules are extended Bayes with respect to a family of the exchangeable normal priors for any bounded and additive loss function. Furthermore, for a simple loss function, the results of a Monte Carlo comparison of Gupta-type rules and Seal-type rules are presented. They indicate that, in general, Gupta-type rules perform better than Seal-type rules.
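
A sketch of a Gupta-type subset selection rule of the kind the paper shows to be extended Bayes, assuming k normal populations with common known variance: retain every population whose sample mean comes within a calibrated distance d of the largest sample mean.

```python
# Gupta-type subset selection for k normal populations.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
k, n, sigma, P_star = 5, 20, 1.0, 0.90
true_means = np.array([0.0, 0.1, 0.2, 0.5, 0.6])
xbar = rng.normal(true_means, sigma / np.sqrt(n))

# Calibrate c from: integral Phi(z + c)^(k-1) dPhi(z) = P*,
# then set d = c * sigma / sqrt(n).
zs = np.linspace(-8, 8, 4001)
phi = np.exp(-zs**2 / 2) / np.sqrt(2 * np.pi)
def coverage(c):
    return np.trapz(norm.cdf(zs + c) ** (k - 1) * phi, zs)
cs = np.linspace(0.0, 5.0, 1001)
c = cs[np.searchsorted([coverage(v) for v in cs], P_star)]
d = c * sigma / np.sqrt(n)

# Retain population i whenever xbar_i >= max_j xbar_j - d.
selected = np.flatnonzero(xbar >= xbar.max() - d)
print("selection constant d =", round(d, 3))
print("retained populations:", selected, "with means", np.round(xbar, 2))
```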

Journal ArticleDOI
TL;DR: A series of examples is given illustrating the use of Bayes' theorem to express our state of knowledge about the frequency of reactor transient events.
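
A sketch of the kind of update such examples involve, assuming the standard conjugate model for event frequencies: a Gamma(a, b) prior on the transient frequency combined, via Bayes' theorem, with a Poisson count of x events in T reactor-years gives a Gamma(a + x, b + T) posterior. All numbers are illustrative.

```python
# Conjugate Poisson-gamma update for an event frequency.
from scipy.stats import gamma

a, b = 2.0, 10.0          # prior: mean a/b = 0.2 events per reactor-year
x, T = 3, 25.0            # observed: 3 events in 25 reactor-years

post = gamma(a + x, scale=1.0 / (b + T))
print("posterior mean frequency:", round(post.mean(), 4), "per reactor-year")
print("90% credible interval:   ",
      tuple(round(q, 4) for q in post.interval(0.90)))
```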


Journal ArticleDOI
TL;DR: In this article, the problem of estimating the variance of a finite population is studied in a Bayesian framework. On the basis of the modern theoretical approach to sampling from finite populations and the special structure of the likelihood functions, Bayes estimators of the population variance are derived.
Abstract: The problem of estimating the variance of a finite population is studied in a Bayesian framework. On the basis of the modern theoretical approach to sampling from finite populations and the special structure of the likelihood functions, Bayes estimators of the population variance are derived. The structure of equivariant estimators is analyzed and Bayes equivariant estimators in the strict and the generalized sense are derived. The Bayes risk efficiency of the classical estimators is studied.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of the choice of an estimator and an experimental design for estimating the response surface in a linear regression model according to Bayes- and minimax-Bayes-optimality.
Abstract: We consider the problem of the choice of an estimator and an experimental design for estimating the response surface in a linear regression model according to Bayes- and minimax-Bayes-optimality. The well-known Bayes estimator w.r.t. normal i.i.d. observations, a conjugate prior distribution and quadratic loss is shown to have a satisfying robustness with regard to the optimality criteria if we change to larger classes of prior and error distributions (e.g. all distributions with fixed first and second moments) and more general loss functions (e.g. monotone and convex functions). Moreover, for this Bayes estimator we can point out a remarkable robustness of designing against the loss function; e.g. minimax designs under quadratic loss are also minimax under any convex loss function.
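
A minimal sketch of the Bayes estimator these robustness results concern, assuming normal observations with known variance, a conjugate normal prior on the regression coefficients, and quadratic loss, so that the Bayes estimate is the posterior mean; the design and prior below are illustrative.

```python
# Bayes (posterior-mean) estimator for a normal linear regression model.
import numpy as np

rng = np.random.default_rng(10)
n, sigma2 = 30, 0.25
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])   # design matrix
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(0, np.sqrt(sigma2), n)

mu0 = np.zeros(2)                 # prior mean
S0 = np.eye(2) * 4.0              # prior covariance

# Posterior mean = (X'X/sigma2 + S0^-1)^-1 (X'y/sigma2 + S0^-1 mu0).
S0_inv = np.linalg.inv(S0)
post_cov = np.linalg.inv(X.T @ X / sigma2 + S0_inv)
post_mean = post_cov @ (X.T @ y / sigma2 + S0_inv @ mu0)

print("Bayes estimate:", np.round(post_mean, 3))
print("least squares: ", np.round(np.linalg.lstsq(X, y, rcond=None)[0], 3))
```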