
Showing papers on "Bayes' theorem published in 1985"


Journal ArticleDOI
TL;DR: Empirical Bayes methods have proved to be powerful data-analysis tools in recent years; the empirical Bayes model is much richer than either the classical or the ordinary Bayes model and often provides superior estimates of parameters.
Abstract: Empirical Bayes methods have been shown to be powerful data-analysis tools in recent years. The empirical Bayes model is much richer than either the classical or the ordinary Bayes model and often provides superior estimates of parameters. An introduction to some empirical Bayes methods is given, and these methods are illustrated with two examples.
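The abstract gives no worked example, so here is a minimal sketch of the normal-normal empirical Bayes shrinkage idea it alludes to, on simulated data; every name and number below is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data: group means theta_i ~ N(mu, tau^2),
# observations x_i ~ N(theta_i, sigma^2) with known sigma.
k, sigma = 20, 1.0
true_theta = rng.normal(5.0, 2.0, size=k)
x = rng.normal(true_theta, sigma)

# Empirical Bayes step: estimate the prior mean and variance from the data themselves
mu_hat = x.mean()
tau2_hat = max(x.var(ddof=1) - sigma**2, 0.0)   # method-of-moments estimate of tau^2

# Shrink each observation toward the estimated prior mean
shrink = tau2_hat / (tau2_hat + sigma**2)
theta_eb = mu_hat + shrink * (x - mu_hat)

print("MLE error:", np.mean((x - true_theta) ** 2))
print("EB  error:", np.mean((theta_eb - true_theta) ** 2))
```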

446 citations


Journal ArticleDOI
TL;DR: This paper used information about study features (study contexts, designs, treatments, and subjects) to account for variation in quantitative research synthesis, and found that information about these features can be used to improve the quality of experimental data.
Abstract: As interest in quantitative research synthesis grows, investigators increasingly seek to use information about study features—study contexts, designs, treatments, and subjects—to account for variat...

280 citations


Journal ArticleDOI
TL;DR: Methods are illustrated by application to preliminary data from a study aimed at identifying hitherto unsuspected occupational carcinogens; the authors prefer an approach in which all associations in the data are reported, whether significant or not, followed by a ranking in order of priority for investigation using empirical Bayes techniques.
Abstract: Epidemiologic research often involves the simultaneous assessment of associations between many risk factors and several disease outcomes. In such situations, often designed to generate hypotheses, multiple univariate hypothesis-testing is not an appropriate basis for inference. The number of true positive associations in a collection of many associations can be estimated by comparing the observed distribution of p values for the positive associations to a theoretical uniform distribution, or to the observed distribution of negative associations, or to an empiric randomization distribution. None of these approaches, however, will distinguish the true from the false positive associations. Various criteria for selecting a subset of associations to report are considered by the authors, including Bonferroni adjustment of p values, splitting the sample for searching and testing, Bayesian inference, and decision theory. The authors prefer an approach in which all associations in the data are reported, whether significant or not, followed by a ranking in order of priority for investigation using empirical Bayes techniques. Methods are illustrated by application to preliminary data from a study aimed at identifying hitherto unsuspected occupational carcinogens.
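As a rough illustration of one way the number of true positives can be estimated by comparing the observed p-value distribution with the uniform distribution expected under the null (a Storey-type estimate; the authors' own formulas are not reproduced here), with invented p-values:

```python
import numpy as np

def estimated_true_positives(pvals, lam=0.5):
    """Rough estimate of the number of true-positive associations in a batch of
    p-values, by comparing the observed distribution with the uniform one that
    all-null data would produce (a Storey-type estimate, not the paper's own)."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    # Under the null, a fraction (1 - lam) of p-values should exceed lam.
    m0_hat = min(m, np.sum(pvals > lam) / (1.0 - lam))   # estimated number of nulls
    return m - m0_hat

rng = np.random.default_rng(1)
p_null = rng.uniform(size=180)          # 180 null associations
p_alt = rng.beta(0.5, 8.0, size=20)     # 20 real associations (concentrated near 0)
print(estimated_true_positives(np.concatenate([p_null, p_alt])))
```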

190 citations


Journal ArticleDOI
TL;DR: The classical hypothesis testing in clinical trials involving two treatments is criticized and a Bayesian approach in which sampling stops when the probability that one treatment is the better exceeds a specified value is recommended.
Abstract: This paper concerns interim analysis in clinical trials involving two treatments from the points of view of both classical and Bayesian inference. I criticize classical hypothesis testing in this setting and describe and recommend a Bayesian approach in which sampling stops when the probability that one treatment is the better exceeds a specified value. I consider application to normal sampling analysed in stages and evaluate the gain in average sample number as a function of the number of interim analyses.
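A minimal sketch of the kind of stopping rule described, assuming normal responses with known variance, flat priors on both treatment means, and an illustrative 0.95 threshold; none of the numbers come from the paper.

```python
import numpy as np
from scipy.stats import norm

def prob_A_better(xbar_a, xbar_b, sigma, n_a, n_b):
    """Posterior probability that treatment A has the higher mean response,
    assuming normal responses with known sigma and flat priors on both means."""
    diff_mean = xbar_a - xbar_b
    diff_sd = sigma * np.sqrt(1.0 / n_a + 1.0 / n_b)
    return norm.cdf(diff_mean / diff_sd)

# Illustrative interim analyses: stop when the probability exceeds 0.95
rng = np.random.default_rng(2)
a_all = rng.normal(0.4, 1.0, size=200)   # hypothetical true effect 0.4 vs 0.0
b_all = rng.normal(0.0, 1.0, size=200)
for n in range(20, 201, 20):             # analyse in stages of 20 patients per arm
    p = prob_A_better(a_all[:n].mean(), b_all[:n].mean(), sigma=1.0, n_a=n, n_b=n)
    print(f"n per arm = {n:3d}   P(A better) = {p:.3f}")
    if p > 0.95:
        print("Stop: A declared better.")
        break
```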

98 citations


Journal ArticleDOI
TL;DR: A new method to estimate the auditory brainstem response when the electrical activity from the recording electrodes displays non-stationarity, i.e. varies between low and high levels is described, based on a statistical approach called Bayesian inference.
Abstract: The present paper describes a new method to estimate the auditory brainstem response when the electrical activity from the recording electrodes displays non-stationarity, i.e. varies between low and high levels. The method is based on a statistical approach called Bayesian inference and weights the individual components (here blocks of 250 sweeps) inversely proportional to the level of the noise activity during the recording. Fifty sets of data from 10 consecutive patients obtained during stimulation at high intensity are used to evaluate the difference between classic averaging and the present method, which is called Bayes estimation. In approximately 30% of the cases, a significant overall improvement is obtained by the new method; the classic averaging technique would here require, on average, 50% more sweeps to obtain the same precision of the ABR estimate. Also the latency and amplitude parameters of the Jv wave complex are evaluated, and it is shown that the parameter variance decreases by a factor of approximately 2 when Bayes estimation is used. The new technique is compared with a similar technique recently presented by Hoke et al. (1984), and the differences and similarities are discussed.
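A toy sketch of inverse-noise weighting of sweep blocks in the spirit of the method; the actual noise estimation and weighting details in the paper may differ, and all signal parameters below are made up.

```python
import numpy as np

def bayes_weighted_average(blocks):
    """Weight each block of averaged sweeps inversely to its estimated noise power,
    in the spirit of the paper's Bayes estimation (the details here are illustrative).
    blocks: array of shape (n_blocks, n_samples)."""
    blocks = np.asarray(blocks, dtype=float)
    # Crude noise estimate per block: variance of sample-to-sample differences
    noise_var = np.var(np.diff(blocks, axis=1), axis=1)
    weights = 1.0 / noise_var
    weights /= weights.sum()
    return weights @ blocks                        # weighted average across blocks

# Hypothetical example: 8 blocks of 250 sweeps each, already averaged within block
rng = np.random.default_rng(3)
t = np.linspace(0, 10e-3, 256)
signal = 0.5e-6 * np.sin(2 * np.pi * 500 * t)
noise_levels = np.array([1, 1, 1, 5, 5, 1, 1, 10]) * 1e-6   # non-stationary noise
blocks = signal + rng.normal(0, noise_levels[:, None], size=(8, t.size))
print(np.abs(bayes_weighted_average(blocks) - signal).mean(),   # Bayes-weighted error
      np.abs(blocks.mean(axis=0) - signal).mean())              # classic-average error
```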

96 citations


Journal ArticleDOI
TL;DR: A nonlinear multiple regression analysis program, MULTI2(BAYES), was developed for microcomputers; it combines the insufficient individual patient data with pharmacokinetic parameters published in the literature to predict the plasma time course of the patient.
Abstract: A nonlinear multiple regression analysis program, MULTI2(BAYES), was developed for microcomputers. The Bayesian algorithm incorporated in MULTI2(BAYES) combines the insufficient individual patient data (individual data) with pharmacokinetic parameters published in the literature (population parameters) to predict the plasma time course of the patient. The program is written using only minimal Microsoft BASIC commands so that it can run on many personal computers without modification. MULTI2(BAYES) places no restriction on the numbers of parameters to be estimated, independent variables, or dependent variables, and pharmacokinetic models can be defined freely by the user. Four nonlinear least-squares algorithms, i.e. the Gauss-Newton method, the damping Gauss-Newton method, Fletcher's modified Marquardt method, and the simplex method, can be selected at the user's option. MULTI2(BAYES) also calculates 95% confidence limits for the predicted time courses.
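The program itself is written in BASIC; the Python sketch below only illustrates the kind of objective such a Bayesian algorithm minimizes: a residual term for the sparse individual data plus a penalty pulling the parameters toward literature (population) values. The compartment model, prior values, and data are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# One-compartment IV bolus model: C(t) = (dose / V) * exp(-k_e * t)
def conc(t, V, ke, dose):
    return (dose / V) * np.exp(-ke * t)

# Population parameters "from the literature" (illustrative values, not the paper's)
pop_mean = np.array([20.0, 0.10])     # V [L], ke [1/h]
pop_sd = np.array([5.0, 0.03])
sigma_obs = 0.5                        # assumed assay SD [mg/L]

def neg_log_posterior(theta, t_obs, c_obs, dose):
    """Bayesian objective: residual sum of squares plus a penalty that pulls the
    individual parameters toward the population values (normal priors assumed)."""
    V, ke = theta
    resid = (c_obs - conc(t_obs, V, ke, dose)) / sigma_obs
    prior = (theta - pop_mean) / pop_sd
    return 0.5 * (resid @ resid + prior @ prior)

# Sparse individual data: only two plasma samples (invented)
t_obs = np.array([1.0, 6.0])
c_obs = np.array([4.1, 2.4])
fit = minimize(neg_log_posterior, x0=pop_mean, args=(t_obs, c_obs, 100.0),
               method="Nelder-Mead")
print("MAP estimates  V = %.1f L,  ke = %.3f 1/h" % tuple(fit.x))
```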

82 citations


Journal ArticleDOI
TL;DR: In this article, an empirical Bayes method is used to obtain adjusted rates of cancer mortality that are more stable for use in comparisons among cities and to predict future mortality trends in Missouri cities.
Abstract: An empirical Bayes method is used to obtain adjusted rates of cancer mortality that are more stable for use in comparisons among cities and to predict future mortality trends. The method is illustrated using data on stomach and bladder cancers in Missouri cities.
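A small sketch of the Poisson-gamma style of empirical Bayes rate smoothing that such an analysis typically uses; the paper's exact formulation is not reproduced, and the counts below are invented.

```python
import numpy as np

def eb_poisson_gamma_rates(deaths, population):
    """Empirical Bayes smoothing of city-level mortality rates (a standard
    Poisson-gamma construction; the paper's own formulation may differ).
    Shrinks each crude rate toward the pooled rate, more strongly for small cities."""
    deaths, population = map(np.asarray, (deaths, population))
    crude = deaths / population
    pooled = deaths.sum() / population.sum()
    # Moment estimate of the between-city variance of the true rates
    var_between = max(np.average((crude - pooled) ** 2, weights=population)
                      - pooled / np.mean(population), 0.0)
    if var_between == 0:
        return np.full_like(crude, pooled)
    # Gamma prior with mean `pooled` and variance `var_between`
    alpha = pooled ** 2 / var_between
    beta = pooled / var_between
    return (deaths + alpha) / (population + beta)   # posterior mean rate per city

deaths = np.array([3, 12, 0, 40, 7])                 # invented city-level counts
pop = np.array([2_000, 15_000, 900, 60_000, 8_000])
print(eb_poisson_gamma_rates(deaths, pop))
```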

79 citations


Journal ArticleDOI
17 May 1985-JAMA
TL;DR: The competing-hypotheses heuristic is discussed within the context of diagnostic problem-solving models derived from the literature on medical decision making and clinicopathological conference case records and suggested that the heuristic may be useful as a complement to clinical judgment.
Abstract: Evaluating the same diagnostic information across the plausible competing diagnoses is a practical strategy (ie, heuristic) to guide decision making in the face of uncertainty. The prevalence of use of this competing-hypotheses heuristic by 89 first-year house officers was examined in three simulated patient cases. Results indicated that only a minority (24%) of the house officers selected optimal diagnostic information consistent with this Bayesian heuristic across all three cases. Almost all (97%) of the house officers selecting optimal diagnostic information were able to identify the most probable diagnosis specified by Bayes' theorem, while only a chance number (53%) of house officers selecting nonoptimal information were able to identify the most probable diagnosis. The competing-hypotheses heuristic is discussed within the context of diagnostic problem-solving models derived from the literature on medical decision making and clinicopathological conference case records. It is suggested that the heuristic, which does not necessitate any mathematical calculations, may be useful as a complement to clinical judgment. ( JAMA 1985;253:2858-2862)
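The heuristic requires no calculation by the clinician, but the Bayesian computation it approximates is just the following; the priors and likelihoods are illustrative, not taken from the study's cases.

```python
import numpy as np

def posterior_over_diagnoses(priors, likelihoods):
    """Bayes' theorem applied across a set of competing diagnoses: the same test
    finding is evaluated under every hypothesis, not just the leading one."""
    priors = np.asarray(priors, dtype=float)
    likelihoods = np.asarray(likelihoods, dtype=float)
    joint = priors * likelihoods
    return joint / joint.sum()

# Illustrative numbers: three competing diagnoses and the probability of the
# observed finding under each of them.
priors = [0.60, 0.30, 0.10]           # P(diagnosis) before the test
likelihoods = [0.10, 0.70, 0.40]      # P(finding | diagnosis)
print(posterior_over_diagnoses(priors, likelihoods))   # -> [0.194, 0.677, 0.129]
```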

78 citations


Book ChapterDOI
10 Jul 1985
TL;DR: The maximum entropy principle, with minimum cross-entropy updating, provides a way of making assumptions about the missing specification that minimizes the additional information assumed, and thus offers a standard against which the other UISs can be compared.
Abstract: Several different uncertain inference systems (UISs) have been developed for representing uncertainty in rule-based expert systems. Some of these, such as Mycin's Certainty Factors, Prospector, and Bayes' Networks, were designed as approximations to probability, and others, such as Fuzzy Set Theory and Dempster-Shafer Belief Functions, were not. How different are these UISs in practice, and does it matter which you use? When combining and propagating uncertain information, each UIS must, at least by implication, make certain assumptions about correlations not explicitly specified. The maximum entropy principle, with minimum cross-entropy updating, provides a way of making assumptions about the missing specification that minimizes the additional information assumed, and thus offers a standard against which the other UISs can be compared. We describe a framework for the experimental comparison of the performance of different UISs, and provide some illustrative results.
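As a concrete illustration of minimum cross-entropy updating, the baseline the authors compare against, here is a discrete update under a single moment constraint; the hypotheses, scores, and target value are invented.

```python
import numpy as np
from scipy.optimize import brentq

def min_cross_entropy_update(prior, f, target):
    """Minimum cross-entropy update of a discrete prior subject to the moment
    constraint E_q[f] = target; the solution is an exponential tilting of the
    prior. Purely illustrative of the principle the paper uses as a baseline."""
    prior, f = np.asarray(prior, float), np.asarray(f, float)

    def tilted(lam):
        w = prior * np.exp(lam * f)
        return w / w.sum()

    lam = brentq(lambda l: tilted(l) @ f - target, -50, 50)
    return tilted(lam)

# Prior belief over four exclusive hypotheses and a constraint that the expected
# "severity score" f should equal 2.0 after new evidence arrives (all invented).
prior = [0.4, 0.3, 0.2, 0.1]
f = [0, 1, 2, 3]
print(min_cross_entropy_update(prior, f, 2.0))
```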

73 citations


Book ChapterDOI
10 Jul 1985
TL;DR: This paper is concerned with two theories of probability judgment: the Bayesian theory and the theory of belief functions and illustrates these theories with some simple examples and discusses some of the issues that arise when the authors try to implement them in expert systems.
Abstract: This paper is concerned with two theories of probability judgment: the Bayesian theory and the theory of belief functions. It illustrates these theories with some simple examples and discusses some of the issues that arise when we try to implement them in expert systems. The Bayesian theory is well known; its main ideas go back to the work of Thomas Bayes (1702–1761). The theory of belief functions, often called the Dempster-Shafer theory in the artificial intelligence community, is less well known, but it has even older antecedents; belief-function arguments appear in the work of George Hooper (1640–1723) and James Bernoulli (1654–1705). For elementary expositions of the theory of belief functions, see Shafer (1976, 1985).
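A small sketch of Dempster's rule of combination, the belief-function counterpart of Bayesian conditioning discussed here; the two simple support functions and their weights are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.
    Focal elements are frozensets over the frame of discernment; conflicting
    mass (empty intersections) is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two simple support functions on the frame {disease, no_disease} (illustrative weights)
frame = frozenset({"disease", "no_disease"})
m1 = {frozenset({"disease"}): 0.7, frame: 0.3}   # evidence 1 supports "disease" to 0.7
m2 = {frozenset({"disease"}): 0.6, frame: 0.4}   # evidence 2 supports it to 0.6
print(dempster_combine(m1, m2))                  # combined support 0.88 for "disease"
```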

46 citations


Journal ArticleDOI
TL;DR: In this paper, a partially-Bayes method is used to improve efficiency in a class of problems in which the number of nuisance parameters increases to infinity, where the parameter of interest is estimated in an asymptotically unbiased way while James-Stein shrinkage is applied to the nuisance parameter estimates.
Abstract: Empirical partially Bayes methods are considered as a means of improving efficiency in a class of problems in which the number of nuisance parameters increases to infinity. In the method used, the parameter of interest is estimated in an asymptotically unbiased way while James-Stein shrinkage is applied to the nuisance parameter estimates. When the shrinkage estimators are carefully chosen, this yields estimators generally more efficient than maximum likelihood. In the models considered, the conditional structure imposed allows construction of a simple estimator which is broadly consistent and efficient.

Journal ArticleDOI
TL;DR: For the exponential life distribution model and any prior distribution for the failure rate parameter, the predictive distribution has a decreasing failure rate as mentioned in this paper, and a Bayes explanation is given of why this is logically reasonable.
Abstract: For the exponential life distribution model and any prior distribution for the failure rate parameter, the predictive distribution has a decreasing failure rate. A Bayes explanation is given of why this is logically reasonable.
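For the conjugate gamma prior (one convenient special case; the paper's result holds for any prior), the predictive survival and hazard can be written out explicitly:

```latex
S(t) \;=\; \int_0^\infty e^{-\lambda t}\,\pi(\lambda)\,d\lambda
\;\overset{\pi=\mathrm{Gamma}(a,b)}{=}\;\left(\frac{b}{b+t}\right)^{a},
\qquad
h(t) \;=\; -\frac{d}{dt}\log S(t) \;=\; \frac{a}{b+t}.
```

The predictive hazard a/(b + t) decreases in t even though each exponential component has constant hazard: surviving longer shifts posterior weight toward smaller values of the failure rate, which is the Bayes explanation the paper gives.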

Journal ArticleDOI
TL;DR: A new method for automated answer justification is presented that is suitable for use in computer-supported decision aids in medicine which are based on Bayesian classification and seen to produce understandable and clinically plausible explanations.

Journal ArticleDOI
TL;DR: In this paper, Bayesian probability theory in conjunction with the Poisson process model is used to estimate or predict the inter-arrival times (Tj) of strong earthquakes in the Hellenic Arc.

15 Jul 1985
TL;DR: The theoretical foundation of the model is discussed by introducing Bug Distribution and hypothesis testing (Bayes' decision rules for minimum errors) for classifying an individual into his/her most plausible latent state of knowledge.
Abstract: A model (called rule space) which permits measuring cognitive skill acquisition, diagnosing cognitive errors, and detecting the weaknesses and strengths of knowledge possessed by individuals was introduced earlier. This study further discusses the theoretical foundation of the model by introducing Bug Distribution and hypothesis testing (Bayes' decision rules for minimum errors) for classifying an individual into his/her most plausible latent state of knowledge. The model is illustrated with the domain of fraction arithmetic and compared with the results obtained from a conventional Artificial Intelligence approach. Keywords: Rule space model; Error diagnosis; Bayes' decision rules; Classification; Latent knowledge.

Journal ArticleDOI
TL;DR: In this model the posterior density of the random variables depends on only a weighted average of the experts' means, with weights that depend on the experts' assessments of previously known quantities.
Abstract: When two or more information sources (“experts”) provide a decision maker with information on two or more random variables, the decision maker using Bayes's rule has an opportunity to (a) update a prior about the random variables and (b) calibrate the experts. (Calibration is the process of adjusting the decision maker's likelihood about the experts' assessments.) This article presents a model for this two-way process and specializes to the case in which the experts' assessment errors have a multivariate normal density. In general, we find that variables which the decision maker and the experts regard as independent a priori will be dependent a posteriori because of dependence in the assessment errors. Formulas for posterior densities are given for the normal model. In this model the posterior density of the random variables depends on only a weighted average of the experts' means, with weights that depend on the experts' assessments of previously known quantities. I also present a special case o...
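A stripped-down version of the combination step, assuming independent, already-calibrated normal assessment errors; the paper's model also handles dependence among the experts' errors, which this sketch ignores, and all numbers are illustrative.

```python
import numpy as np

def combine_experts(prior_mean, prior_var, expert_means, expert_vars):
    """Posterior for an unknown quantity after hearing several experts, assuming
    (unlike the more general model in the paper) independent normal assessment
    errors with known, already-calibrated variances."""
    means = np.concatenate(([prior_mean], expert_means))
    varis = np.concatenate(([prior_var], expert_vars))
    precision = 1.0 / varis
    post_var = 1.0 / precision.sum()
    post_mean = post_var * np.dot(precision, means)   # precision-weighted average
    return post_mean, post_var

# Decision maker's prior and two experts of different (calibrated) reliability
print(combine_experts(prior_mean=10.0, prior_var=25.0,
                      expert_means=[14.0, 11.0], expert_vars=[4.0, 9.0]))
```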

Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of estimating the vector β of the regression coefficients in a multiple linear regression with a completely unknown and unspecified distribution and the error vector e having a multivariate standard normal distribution.
Abstract: Estimation of the vector β of the regression coefficients in a multiple linear regression Y = Xβ + e is considered when β has a completely unknown and unspecified distribution and the error vector e has a multivariate standard normal distribution. The optimal estimator for β, which minimizes the overall mean squared error, cannot be constructed for use in practice. Using X, Y and the information contained in the observation vectors obtained from n independent past experiences of the problem, (empirical Bayes) estimators for β are exhibited. These estimators are compared with the optimal estimator and are shown to be asymptotically optimal. Estimators asymptotically optimal with rates near O(n⁻¹) are constructed.

Journal ArticleDOI
TL;DR: The Maximus, bootstrap, and Bayes methods can be useful in calculating lower s-confidence limits on system reliability using binomial component test data using Monte Carlo simulation.
Abstract: The Maximus, bootstrap, and Bayes methods can be useful in calculating lower s-confidence limits on system reliability using binomial component test data. The bootstrap and Bayes methods use Monte Carlo simulation, while the Maximus method is closed-form. The Bayes method is based on noninformative component prior distributions. The three methods are compared by means of Monte Carlo simulation using 20 simple through moderately complex examples. The simulation was generally restricted to the region of high reliability components. Sample coverages and average interval lengths are both used as performance measures. In addition to insights regarding the adequacy and desirability of each method, the comparison reveals the following regions of superior performance: 1. The Maximus method is generally superior for: a) moderate to large series systems of reliable components with small quantities of test data per component, and b) small series systems of repeated components. 2. The bootstrap method is generally superior for highly reliable and redundant systems. 3. The Bayes method is generally superior for: a) moderate to large series systems of reliable components with moderate to large numbers of component tests, and b) small series systems of reliable non-repeated components.
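A minimal Monte Carlo sketch of the Bayes method for a series system, using Jeffreys Beta(1/2, 1/2) component priors as one choice of noninformative prior; the paper's exact prior may differ, and the test data and confidence level are illustrative.

```python
import numpy as np

def bayes_lower_limit(successes, trials, conf=0.90, n_draws=100_000, seed=0):
    """Monte Carlo lower confidence (credibility) limit on the reliability of a
    series system from binomial component test data, using independent
    Jeffreys Beta(1/2, 1/2) priors for each component."""
    rng = np.random.default_rng(seed)
    successes, trials = np.asarray(successes), np.asarray(trials)
    # Posterior draws for each component reliability, then product across the series
    draws = rng.beta(successes + 0.5, trials - successes + 0.5,
                     size=(n_draws, len(trials)))
    system = draws.prod(axis=1)
    return np.quantile(system, 1.0 - conf)

# Three components in series: 29/30, 48/50, 20/20 successful tests (invented data)
print(bayes_lower_limit([29, 48, 20], [30, 50, 20], conf=0.90))
```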

Journal ArticleDOI
TL;DR: This paper first distinguishes the situation of clusters identified by anecdotal observation from those that emerge from systematic searches, and procedures are described for testing the global null hypothesis of no exposure-disease associations and for estimating the number of true-positive associations.
Abstract: Point-source environmental hazards are often identified by examination of unusual clusters of disease cases. The very large number of potential clusters gives rise to the statistical problem of "multiple inference," i.e., the more clusters examined, the greater the risk of "false-positive" associations emerging by chance alone. This paper first distinguishes the situation of clusters identified by anecdotal observation from those that emerge from systematic searches. The latter may or may not include a systematic enumeration of potential causal factors associated with each potential disease cluster. If exposure information is not systematically available, empirical Bayes procedures are suggested as a basis for ranking the observed clusters in order of priority for further investigation. If exposure information is systematically available, empirical Bayes procedures can be used to select associations to report or to rank them in order of priority for confirmation. In addition, procedures are described for testing the global null hypothesis of no exposure-disease associations and for estimating the number of true-positive associations. These approaches are advocated in preference to classical frequentist approaches of multiplying p values by the number of tests performed.


Journal ArticleDOI
TL;DR: In this article, the restricted Bayes and minimax principles of Hodges and Lehmann [8] are applied to the problem of estimating the parameters of a linear model when the error distribution is Gaussian and the prior distribution is not exactly known.
Abstract: The restricted Bayes and minimax principles of Hodges and Lehmann [8] are applied to the problem of estimating the parameters of a linear model when: a) the error distribution is Gaussian and the prior distribution is not exactly known; b) the prior distribution is Gaussian and the given error distribution is not precise. Approximate analytical and numerical solutions are studied.

Journal ArticleDOI
01 Dec 1985
TL;DR: The sampling frequencies obtained using Bayesian decision theory are close to those obtained with other mathematical methods and are also similar to the sampling frequency chosen by the operating authority when monitoring the well.
Abstract: Bayesian decision theory is applied to determine an optimum sampling frequency for chlorides in a public water supply well. The well is located approximately one mile downstream of a landfill with an established contaminant plume and has exhibited increasing chloride concentrations. A digital computer program is utilized to solve the optimization problem using Bayes' theorem for continuous probability distributions. The sampling frequency obtained by this method is compared with the frequency used by the authority operating the public water supply well and to the frequencies estimated by other methods that have been recommended for determining sampling frequency. The sampling frequencies obtained using Bayesian decision theory are close to those obtained with other mathematical methods. The results are also similar to the sampling frequency chosen by the operating authority when monitoring the well.

01 Apr 1985
TL;DR: It is shown that for segmentation problems the optimal Bayesian estimator is the maximizer of the posterior marginals, while for reconstruction tasks, the threshold posterior mean has the best possible performance.
Abstract: A very fruitful approach to the solution of image segmentation and surface reconstruction tasks is their formulation as estimation problems via the use of Markov random field models and Bayes theory. However, the Maximum a Posteriori (MAP) estimate, which is the one most frequently used, is suboptimal in these cases. We show that for segmentation problems the optimal Bayesian estimator is the maximizer of the posterior marginals, while for reconstruction tasks, the thresholded posterior mean has the best possible performance. We present efficient distributed algorithms for approximating these estimates in the general case. Based on these results, we develop a maximum likelihood procedure that leads to a parameter-free distributed algorithm for restoring piecewise constant images. To illustrate these ideas, the reconstruction of binary patterns is discussed in detail.
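A tiny, exhaustively enumerated 1-D example contrasting the MAP estimate with the maximizer of the posterior marginals (MPM); the prior, noise model, and sizes are invented for illustration only.

```python
import itertools
import numpy as np

# Toy 1-D binary "image": 8 sites, a smoothness prior, and noisy observations.
n, beta, fit = 8, 0.9, 0.8            # sites, coupling strength, data-agreement weight
rng = np.random.default_rng(4)
truth = np.array([0, 0, 0, 1, 1, 1, 1, 0])
obs = truth.copy()
obs[rng.random(n) < 0.25] ^= 1         # observations: truth with ~25% of labels flipped

def log_posterior(x):
    prior = beta * sum(x[i] == x[i + 1] for i in range(n - 1))   # favour equal neighbours
    like = fit * np.sum(np.asarray(x) == obs)                    # favour agreement with data
    return prior + like

# Exact posterior over all 2^8 configurations (feasible only for this toy size)
configs = np.array(list(itertools.product([0, 1], repeat=n)))
logp = np.array([log_posterior(x) for x in configs])
post = np.exp(logp - logp.max())
post /= post.sum()

map_est = configs[post.argmax()]                 # single most probable configuration
marginals = post @ configs                       # P(x_i = 1) for each site
mpm_est = (marginals > 0.5).astype(int)          # site-wise most probable labels
print("MAP:", map_est)
print("MPM:", mpm_est)
```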

Journal ArticleDOI
TL;DR: The authors use empirical Bayes and bootstrap procedures to develop a measure of precision and an interval estimate for Stein's estimator, and use these to evaluate its performance.

Journal ArticleDOI
TL;DR: In this paper, the convergence rate of the conditional Bayes risk and its limit distribution was studied for the continuous one-parameter exponential family, where the prior distribution is unknown.
Abstract: Several authors have proposed empirical Bayes tests (EBT) for the continuous one-parameter exponential family for the case that the prior distribution is completely unspecified. They investigated the convergence rate of the (unconditional) Bayes risk, and gave upper bounds for this convergence rate. In this paper it is proposed to study the convergence of the conditional Bayes risk. A method is presented which makes it possible to derive the exact convergence rate of the conditional risk and its limit distribution. Several results are given. Also the question is considered whether monotonizing an empirical Bayes test influences its asymptotic properties.

Journal ArticleDOI
TL;DR: In this paper, two new selection procedures, called nonrandomized and randomized Bayes-P* procedures, are defined for selecting a small nonempty subset of k populations which contains the best population.

01 Oct 1985
TL;DR: In this article, the problem of selecting good binomial populations compared with a standard or a control through the empirical Bayes approach is dealt with, and two cases have been studied: one with the pior distribution completely unknown and the other with the prior distribution symmetrical about p = 1/2, but otherwise unknown.
Abstract: This paper deals with the problem of selecting good binomial populations compared with a standard or a control through the empirical Bayes approach. Two cases have been studied: one with the prior distribution completely unknown and the other with the prior distribution symmetrical about p = 1/2 but otherwise unknown. In each case, empirical Bayes rules are derived and their rates of convergence are shown to be of order O(exp(-cn)) for some c > 0, where n is the number of accumulated past experiences at hand. Keywords: Statistical decision theory; Smoothing (Mathematics); Asymptotically optimal.

Journal ArticleDOI
Pierre A. Devijver1
TL;DR: An estimator is derived which is asymptotically unbiased and whose variance can be controlled by the choice of k; it appears to be very economical in its use of samples and quite stable even in very small sample cases.


Journal ArticleDOI
TL;DR: In this paper, the authors discuss Bayes nonparametric estimation of time-dependent failure rates for both complete and censored samples, and the prior degree-of-belief about the failure rate is expressed in the form of a hypothetical sample.
Abstract: This paper discusses Bayes nonparametric estimation of time-dependent failure rates. A point and an interval estimate of the quantiles of the failure process are given for both complete and censored samples. The prior degree-of-belief about the failure rate is expressed in the form of a hypothetical sample. A numerical example is discussed.