
Showing papers on "Bayesian inference published in 1985"


Journal ArticleDOI
TL;DR: The topics and examples discussed in this paper are intended to promote the understanding and extend the practicability of the spline smoothing methodology.
Abstract: Non-parametric regression using cubic splines is an attractive, flexible and widely applicable approach to curve estimation. Although the basic idea was formulated many years ago, the method is not as widely known or adopted as perhaps it should be. The topics and examples discussed in this paper are intended to promote the understanding and extend the practicability of the spline smoothing methodology. Particular subjects covered include the basic principles of the method; the relation with moving average and other smoothing methods; the automatic choice of the amount of smoothing; and the use of residuals for diagnostic checking and model adaptation. The question of providing inference regions for curves, and for relevant properties of curves, is approached via a finite-dimensional Bayesian formulation.
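The roughness-penalty idea behind spline smoothing can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact cubic-spline algorithm: it uses a discrete second-difference penalty (a Whittaker-style smoother), with `lam` playing the role of the smoothing parameter; all names and the toy data are ours.

```python
import numpy as np

def whittaker_smooth(y, lam):
    # Minimize ||y - f||^2 + lam * ||D f||^2, where D takes second
    # differences; larger lam gives a smoother fitted curve.
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)       # (n-2) x n second-difference matrix
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 100)   # noisy observations
f = whittaker_smooth(y, lam=10.0)                     # smoothed curve estimate
```

In the full spline-smoothing methodology the penalty weight would be chosen automatically (e.g. by cross-validation) rather than fixed by hand as here.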

1,018 citations


BookDOI
01 Jan 1985

235 citations


Journal ArticleDOI
TL;DR: Methods are illustrated by application to preliminary data from a study aimed at identifying hitherto unsuspected occupational carcinogens, and prefer an approach in which all associations in the data are reported, whether significant or not, followed by a ranking in order of priority for investigation using empirical Bayes techniques.
Abstract: Epidemiologic research often involves the simultaneous assessment of associations between many risk factors and several disease outcomes. In such situations, often designed to generate hypotheses, multiple univariate hypothesis-testing is not an appropriate basis for inference. The number of true positive associations in a collection of many associations can be estimated by comparing the observed distribution of p values for the positive associations to a theoretical uniform distribution, or to the observed distribution of negative associations, or to an empiric randomization distribution. None of these approaches, however, will distinguish the true from the false positive associations. Various criteria for selecting a subset of associations to report are considered by the authors, including Bonferroni adjustment of p values, splitting the sample for searching and testing, Bayesian inference, and decision theory. The authors prefer an approach in which all associations in the data are reported, whether significant or not, followed by a ranking in order of priority for investigation using empirical Bayes techniques. Methods are illustrated by application to preliminary data from a study aimed at identifying hitherto unsuspected occupational carcinogens.
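The uniform-distribution comparison and the Bonferroni adjustment mentioned above can be sketched in a few lines; the function names and the toy p-values are illustrative, not from the study.

```python
import numpy as np

def estimated_true_positives(p_values, alpha=0.05):
    # Under the global null, p-values are Uniform(0, 1), so roughly
    # alpha * m of them fall below alpha by chance alone; the excess
    # estimates the number of true positive associations.
    m = len(p_values)
    observed = np.sum(np.array(p_values) <= alpha)
    return max(0.0, observed - alpha * m)

def bonferroni(p_values):
    # Classical multiple-comparison adjustment: multiply by the number
    # of tests, capping at 1.
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

p = [0.001, 0.004, 0.02, 0.30, 0.47, 0.62, 0.81, 0.95]
excess = estimated_true_positives(p)   # hits beyond the chance expectation
adj = bonferroni(p)
```

As the abstract notes, the excess count says how many associations are likely real but not which ones; the authors' preferred remedy is to report everything and rank by empirical Bayes priority instead.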

190 citations


Journal ArticleDOI
TL;DR: The classical hypothesis testing in clinical trials involving two treatments is criticized and a Bayesian approach in which sampling stops when the probability that one treatment is the better exceeds a specified value is recommended.
Abstract: This paper concerns interim analysis in clinical trials involving two treatments from the points of view of both classical and Bayesian inference. I criticize classical hypothesis testing in this setting and describe and recommend a Bayesian approach in which sampling stops when the probability that one treatment is the better exceeds a specified value. I consider application to normal sampling analysed in stages and evaluate the gain in average sample number as a function of the number of interim analyses.
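A minimal sketch of such a stopping rule, for two normal arms with known standard deviation and vague priors; the helper name, the 0.95 threshold, and the simulated effect size are illustrative assumptions, not the paper's settings.

```python
import math
import random

def prob_first_better(x1, x2, sigma):
    # With flat priors and known sigma, the posterior of mu1 - mu2 is
    # Normal(xbar1 - xbar2, sigma^2 * (1/n1 + 1/n2)); P(mu1 > mu2) is
    # then a normal CDF evaluated at the standardized difference.
    n1, n2 = len(x1), len(x2)
    diff = sum(x1) / n1 - sum(x2) / n2
    se = sigma * math.sqrt(1 / n1 + 1 / n2)
    return 0.5 * (1 + math.erf(diff / (se * math.sqrt(2))))

random.seed(1)
sigma, threshold = 1.0, 0.95
arm1, arm2 = [], []
for stage in range(20):                    # interim analysis after each stage
    arm1 += [random.gauss(0.5, sigma) for _ in range(10)]
    arm2 += [random.gauss(0.0, sigma) for _ in range(10)]
    p = prob_first_better(arm1, arm2, sigma)
    if p > threshold or (1 - p) > threshold:
        break                              # stop: one treatment is clearly better
```

Unlike a classical group-sequential test, the threshold here needs no multiplicity correction for the number of interim looks, which is the point of the Bayesian formulation.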

98 citations


Journal ArticleDOI
TL;DR: A new method to estimate the auditory brainstem response when the electrical activity from the recording electrodes displays non-stationarity, i.e. varies between low and high levels is described, based on a statistical approach called Bayesian inference.
Abstract: The present paper describes a new method to estimate the auditory brainstem response when the electrical activity from the recording electrodes displays non-stationarity, i.e. varies between low and high levels. The method is based on a statistical approach called Bayesian inference and weights the individual components (here blocks of 250 sweeps) in inverse proportion to the level of the noise activity during the recording. Fifty sets of data from 10 consecutive patients obtained during stimulation at high intensity are used to evaluate the difference between the classic averaging and the present method, which is called Bayes estimation. In approximately 30% of the cases, a significant overall improvement is obtained by the new method. The classic averaging technique would here require 50% more sweeps to be taken to obtain the same precision of the ABR estimate, on average. Also the latency and amplitude parameters of the Jv wave complex are evaluated and it is shown that the parameter variance decreases by a factor of approximately 2 by using the Bayes estimation. The new technique is compared with a similar technique recently presented by Hoke et al. (1984) and the differences and similarities are discussed.
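The core idea, weighting each block of sweeps inversely to its noise level rather than averaging all blocks equally, can be sketched as follows; the function name and the toy signal are ours, not the paper's implementation.

```python
import numpy as np

def bayes_weighted_average(blocks):
    # Each block is a (sweeps x samples) array. Weight each block's mean
    # inversely to its estimated noise power, so quiet blocks dominate.
    means = np.array([b.mean(axis=0) for b in blocks])
    noise = np.array([b.var(axis=0).mean() for b in blocks])  # per-block noise estimate
    w = 1.0 / noise
    w /= w.sum()
    return (w[:, None] * means).sum(axis=0)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 50))        # stand-in for the ABR waveform
quiet = signal + rng.normal(0, 0.1, (250, 50))        # low-noise block of 250 sweeps
noisy = signal + rng.normal(0, 1.0, (250, 50))        # high-noise block of 250 sweeps
est = bayes_weighted_average([quiet, noisy])
```

When the recording is non-stationary, as described above, the inverse-noise weighting keeps a high-noise block from washing out the information in the quiet blocks.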

96 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian decision maker would update his/her probability for the occurrence of an event $A$ in the light of a number of expert opinions expressed as probabilities $q_1, \cdots, q_n$ of $A$.
Abstract: This paper examines how a Bayesian decision maker would update his/her probability $p$ for the occurrence of an event $A$ in the light of a number of expert opinions expressed as probabilities $q_1, \cdots, q_n$ of $A$. It is seen, among other things, that the linear opinion pool, $\lambda_0p + \sum^n_{i = 1} \lambda_iq_i$, corresponds to an application of Bayes' Theorem when the decision maker has specified only the mean of the marginal distribution for $(q_1, \cdots, q_n)$ and requires his/her formula for the posterior probability of $A$ to satisfy a certain consistency condition. A product formula similar to that of Bordley (1982) is also derived in the case where the experts are deemed to be conditionally independent given $A$ (and given its complement).
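The two pooling formulas discussed above are easy to state in code. The linear pool is taken directly from the abstract; the product formula below is one standard form of the Bordley-style rule under conditional independence, written assuming each expert starts from the decision maker's prior, which is our simplifying assumption.

```python
def linear_pool(p, qs, weights):
    # Linear opinion pool: lambda_0 * p + sum_i lambda_i * q_i,
    # with the weights summing to one.
    lam0, lams = weights[0], weights[1:]
    return lam0 * p + sum(lam * q for lam, q in zip(lams, qs))

def product_pool(p, qs):
    # Product formula (cf. Bordley 1982): if the n experts are
    # conditionally independent given A and given its complement, and
    # each shares the prior p, the posterior odds of A are
    #   (p/(1-p))^(1-n) * prod_i q_i/(1-q_i).
    n = len(qs)
    odds = (p / (1 - p)) ** (1 - n)
    for q in qs:
        odds *= q / (1 - q)
    return odds / (1 + odds)

combined_linear = linear_pool(0.5, [0.6, 0.8], [0.2, 0.4, 0.4])
combined_product = product_pool(0.5, [0.6, 0.8])
```

Note the qualitative difference: the linear pool always lands between the inputs, while the product pool can be more extreme than any single expert when the experts agree.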

95 citations


Journal ArticleDOI
TL;DR: A new smoothness priors long AR model method approach is taken to the short data span spectral estimation problem and the critical computation of the likelihood of the hyperparameters of the Bayesian model is realized by a constrained least squares computation.
Abstract: A new smoothness priors long AR model method approach is taken to the short data span spectral estimation problem. An autoregressive (AR) model that is relatively long compared to the data length is considered. The smoothness priors are in the form of the integrated squared derivatives of the AR model whitening filter. A smoothness tradeoff parameter or Bayesian hyperparameter balances the tradeoff between the infidelity of the AR model to the data and the infidelity of the model to the smoothness constraint. The critical computation of the likelihood of the hyperparameters of the Bayesian model is realized by a constrained least squares computation. Numerical examples are shown. The results of simulation studies using entropy comparison evaluations of the Bayesian and minimum AIC-AR methods of spectral estimation are also shown.
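The penalized least squares computation described above can be sketched as follows. This is a simplification, not the paper's method: it penalizes second differences of the AR coefficients directly, whereas the paper's smoothness priors act on derivatives of the whitening filter's frequency response, and the hyperparameter here is fixed rather than chosen by maximizing its likelihood.

```python
import numpy as np

def smoothness_prior_ar(y, order, lam):
    # Fit a long AR model by penalized (constrained) least squares:
    # minimize ||target - X a||^2 + lam * ||D a||^2, where D takes
    # second differences of the coefficient vector a.
    n = len(y)
    X = np.column_stack([y[order - k - 1: n - k - 1] for k in range(order)])
    target = y[order:]
    D = np.diff(np.eye(order), n=2, axis=0)   # roughness penalty on coefficients
    return np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ target)

rng = np.random.default_rng(0)
e = rng.normal(0, 1, 300)
y = np.zeros(300)
for t in range(2, 300):                        # simulate an AR(2) test series
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + e[t]
a = smoothness_prior_ar(y, order=20, lam=1.0)  # long AR model: order 20, n = 300
```

The hyperparameter `lam` plays exactly the tradeoff role the abstract describes: larger values favor a smooth whitening filter over fidelity to the data.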

86 citations


Journal ArticleDOI
TL;DR: The authors argue that both averaging and conservatism in the Bayesian task occur because subjects produce their judgments by using an adjustment strategy that is qualitatively equivalent to averaging, and two experiments are presented that show qualitative errors in the direction of revisions in Bayesian inference tasks that are well accounted for by the simple adjustment strategy.
Abstract: Two empirically well-supported research findings in the judgment literature are (1) that human judgments often appear to follow an averaging rule, and (2) that judgments in Bayesian inference tasks are usually conservative relative to optimal judgments. This paper argues that both averaging and conservatism in the Bayesian task occur because subjects produce their judgments by using an adjustment strategy that is qualitatively equivalent to averaging. Two experiments are presented that show qualitative errors in the direction of revisions in the Bayesian task that are well accounted for by the simple adjustment strategy. Also noted is the tendency for subjects in one experiment to evaluate sample evidence according to representativeness rather than according to relative likelihood. The final discussion describes task variables that predispose subjects toward averaging processes.
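The contrast between optimal Bayesian revision and the averaging-like adjustment strategy can be made concrete with a toy two-hypothesis task; the function names and numbers are illustrative, not the paper's stimuli.

```python
def bayes_posterior(prior, like_h, like_not_h):
    # Optimal revision: Bayes' rule for a single hypothesis H.
    num = prior * like_h
    return num / (num + (1 - prior) * like_not_h)

def averaging_judgment(prior, evidence_value, w=0.5):
    # Adjustment strategy: the judge moves part-way from the prior toward
    # the value suggested by the evidence, which is equivalent to averaging.
    return (1 - w) * prior + w * evidence_value

prior = 0.5
optimal = bayes_posterior(prior, like_h=0.8, like_not_h=0.2)   # = 0.8
judged = averaging_judgment(prior, evidence_value=0.8)         # = 0.65
```

Because an average always lies between the prior and the evidence value, the judged probability undershoots the Bayesian answer, which is the paper's account of conservatism.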

63 citations


Journal ArticleDOI
TL;DR: In this article, Bayesian predictive inference for a general linear function, ω, of the finite population elements is described, and within a broad class of linear estimators of ω the posterior mean is shown to have the optimal frequentist property of minimal bounded mean squared error.
Abstract: Assuming a model appropriate for many multistage sample surveys, Bayesian predictive inference for a general linear function, ω, of the finite population elements is described. In a broad class of linear estimators of ω, the posterior mean, E″(ω), of ω is shown to have the optimal frequentist property of minimal bounded mean squared error. For the special case of three-stage sampling, E″(ω) is described in detail. Also presented are the results of an investigation of the effect on inferences of alteration of the values of some parameters in the prior distribution.
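The flavor of Bayesian predictive inference for a population quantity can be sketched in the simplest single-stage case; this is a textbook-style illustration under an exchangeable model with a vague prior, not the paper's multistage derivation, and the names are ours.

```python
def posterior_mean_total(sample, N):
    # Predictive inference for the population total: the n observed
    # elements contribute their sum, and each of the N - n unobserved
    # elements is predicted by the sample mean (the posterior mean of
    # an unseen element under a vague prior).
    n = len(sample)
    return sum(sample) + (N - n) * (sum(sample) / n)

total_estimate = posterior_mean_total([2, 4, 6], N=10)
```

In the paper's multistage setting the same logic applies within each stage, with the prior parameters controlling how much unobserved clusters borrow from observed ones.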

34 citations



Journal ArticleDOI
TL;DR: In this paper, the authors analyse the normal linear model with known sampling covariance structure invariant under a symmetry group, and sampling mean structure equivariant under the same group.

08 Apr 1985
TL;DR: In this article, a conditional maximization procedure is used to approximate numerical integrations, thus broadening the scope of Bayesian inference and model checking, and the mental test model possesses even broader applicability than hitherto realized.
Abstract: Allan Birnbaum made two historical contributions to the theories of statistics and educational testing. The first was his famous 1962 proof that the sufficiency and conditionality principles together imply the likelihood principle, thus justifying conditional, e.g., Bayesian, inference when compared with frequentist inference. The second was his famous mental test model introduced in Lord and Novick (1968). This proposal attempts to follow in Birnbaum's tradition by using Bayesian ideas to show that his mental test model possesses even broader applicability than hitherto realized. A conditional maximization procedure is used to approximate numerical integrations thus broadening the scope of Bayesian inference and model checking. Additional keywords: Exchangeability, Computer programs, Normal distribution.

Book ChapterDOI
10 Jul 1985
TL;DR: Glymour as discussed by the authors showed that under the conditions assumed by Pednault et al., at most one of the items of evidence can alter the probability of any given hypothesis; thus, although updating is possible, multiple updating for any of the hypotheses is precluded.
Abstract: Duda, Hart, and Nilsson [1] have set forth a method for rule-based inference systems to use in updating the probabilities of hypotheses on the basis of multiple items of new evidence. Pednault, Zucker, and Muresan [2] claimed to give conditions under which independence assumptions made by Duda et al. preclude updating-that is, prevent the evidence from altering the probabilities of the hypotheses. Glymour [3] refutes Pednault et al.'s claim with a counterexample of a rather special form (one item of evidence is incompatible with all but one of the hypotheses); he raises, but leaves open, the question whether their result would be true with an added assumption to rule out such special cases. We show that their result does not hold even with the added assumption, but that it can nevertheless be largely salvaged. Namely, under the conditions assumed by Pednault et al., at most one of the items of evidence can alter the probability of any given hypothesis; thus, although updating is possible, multiple updating for any of the hypotheses is precluded.
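The updating scheme at issue, multiplying the odds of a hypothesis by a likelihood ratio for each item of evidence, can be sketched as follows; the function name is ours, and the snippet shows the mechanics of the Duda-Hart-Nilsson style rule, not the paper's impossibility argument itself.

```python
def update_odds(prior_prob, likelihood_ratios):
    # Odds-likelihood-ratio updating: with evidence items assumed
    # conditionally independent given H and given not-H, each item
    # multiplies the odds of H by its likelihood ratio
    # P(e_i | H) / P(e_i | not-H).
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

posterior = update_odds(0.5, [2.0, 2.0])   # two items, each favoring H 2:1
```

The paper's result is a caution about exactly this scheme: under the Pednault et al. conditions the independence assumptions are so restrictive that at most one item of evidence can actually move any given hypothesis.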

Journal ArticleDOI
TL;DR: In this paper, a Bayesian solution is provided to the problem of testing whether an entire finite population shows a certain characteristic, given that all the elements of a random sample are observed to have it.
Abstract: A Bayesian solution is provided to the problem of testing whether an entire finite population shows a certain characteristic, given that all the elements of a random sample are observed to have it. This is obtained as a direct application of existing theory and, it is argued, improves upon Jeffrey's solution.
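A standard construction of this kind of calculation (not necessarily the paper's exact solution) puts a uniform prior on K, the number of population elements with the characteristic, and conditions on an all-positive sample; the function name is ours.

```python
from math import comb

def prob_all_share(N, n):
    # P(sample of n is all positive | K positives) = C(K, n) / C(N, n);
    # with a uniform prior on K in {0, ..., N}, the posterior probability
    # that the whole population is positive is C(N, n) / sum_K C(K, n).
    likes = [comb(K, n) for K in range(N + 1)]
    return likes[N] / sum(likes)

p_all = prob_all_share(N=10, n=5)
```

By the hockey-stick identity the sum telescopes, and the answer simplifies to (n + 1) / (N + 1), so even a clean sample of half the population leaves substantial doubt.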

Journal ArticleDOI
TL;DR: In this article, the authors consider a sequence of posterior distributions obtained from a prior as a result of successive conditionings by the events of an admissible sequence, and identify certain statistical hypotheses whose limiting posterior probabilities converge to one.
Abstract: The formalism of operational statistics, a generalized approach to probability and statistics, provides a setting within which inference strategies can be studied with great clarity. This paper is concerned with the asymptotic behavior of the Bayesian inference strategy in this setting. We consider a sequence of posterior distributions, obtained from a prior as a result of successive conditionings by the events of an admissible sequence. We identify certain statistical hypotheses whose limiting posterior probabilities converge to one. We describe these hypotheses, and show that when the prior is vague, they contain those probability models which represent the long-run relative frequencies of occurrence for the events in the sequence.
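The convergence phenomenon described above has a familiar concrete instance in ordinary probability: a Bayesian updating a Bernoulli parameter sees the posterior concentrate on the long-run relative frequency. The sketch below uses a Beta-Bernoulli model as a stand-in for the paper's operational-statistics setting; names and data are ours.

```python
def posterior_mean_sequence(outcomes, a=1.0, b=1.0):
    # Beta(a, b) prior on a Bernoulli parameter; after each observation
    # the posterior is Beta(a + successes, b + failures), and its mean
    # drifts toward the empirical relative frequency.
    means = []
    for x in outcomes:
        a += x
        b += 1 - x
        means.append(a / (a + b))
    return means

outcomes = ([1] * 7 + [0] * 3) * 100      # long-run relative frequency 0.7
means = posterior_mean_sequence(outcomes)
```

With a vague prior (a = b = 1), the posterior mean after 1000 observations sits within a fraction of a percent of 0.7, illustrating the limiting behavior the paper establishes in its generalized setting.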



Journal ArticleDOI
TL;DR: In this article, a characterization of diametrically opposed interests between two players is given: either neither player is a Bayesian, or both have a unique probability and utility function up to the usual transformation, or both have many possible probabilities and utilities.
Abstract: A characterization is given of diametrically opposed interests between two players: either neither is a Bayesian, or both have a unique probability and utility function up to the usual transformation, or both have many possible probabilities and utilities. In the second case, their utility functions must have representations that sum to zero, and they must have identical probability distributions on every uncertain event in the space. Implications of this result for negotiations and for game theory are discussed.

Proceedings ArticleDOI
05 Apr 1985
TL;DR: Reasoning from uncertain vehicle level data, the model-based FSA system has successfully inferred the correct locations and components of force structures up to the battalion level.
Abstract: Given a set of image-derived vehicle detections and/or recognized military vehicles, SIGINT cues and a priori analysis of terrain, the force structure analysis (FSA) problem is to utilize knowledge of tactical doctrine and spatial deployment information to infer the existence of military forces such as batteries, companies, battalions, regiments, divisions, etc. A model-based system for FSA has been developed. It performs symbolic reasoning about force structures represented as geometric models. The FSA system is a stand-alone module which has also been developed as part of a larger system, the Advanced Digital Radar Image Exploitation System (ADRIES) for automated SAR image exploitation. The models recursively encode the component military units of a force structure, their expected spatial deployment, search priorities for model components, prior match probabilities, and type hierarchies for uncertain recognition. Partial and uncertain matching of models against data is the basic tool for building up hypotheses of the existence of force structures. Hypothesis management includes the functions of matching models against data, predicting the existence and location of unobserved force components, localization of search areas and resolution of conflicts between competing hypotheses. A subjective Bayesian inference calculus is used to accrue certainty of force structure hypotheses and resolve conflicts. Reasoning from uncertain vehicle level data, the system has successfully inferred the correct locations and components of force structures up to the battalion level. Key words: Force structure analysis, SAR, model-based reasoning, hypothesis management, search, matching, conflict resolution, Bayesian inference, uncertainty.

Journal ArticleDOI
TL;DR: In this article, a single shift in parameter of a life test model is discussed and the effect of this shift on the variance is discussed, while a series ofk samples are being drawn, model itself undergoes a change.
Abstract: This paper analyses the shift in parameter of a life test model. This analysis depends on the prediction of order statistics in future samples based on order statistics in a series of earlier samples in life tests having a general exponential model. While a series ofk samples are being drawn, model itself undergoes a change. Firstly, a single shift is considered and the effect of this shift on the variance is discussed. Generalisation withs shifts (s≦k) ink samples in also taken up and the semi-or-used priors (SOUPS) have been used to get predictive distributions. Finally, shift afteri (i≦k) stages, from exponential to gamma model is considered and for this case effect of the shift on the variance as well as on the Bayesian prediction region (BPR) is analysed along with set of tables.

Journal ArticleDOI
TL;DR: In this article, under the assumption of a common prior over a signalling cost parameter, employers revising their beliefs in a Bayesian fashion converge to a separating equilibrium of Spence's (1974b) continuous ability model.




01 Jan 1985
TL;DR: Differences between estimators obtained from a frequentist's approach and a Bayes strategy with vague priors are illustrated and the Bayes results have practical advantages.
Abstract: Statistical inference is reviewed for survival data applications with hazard models having one parameter per distinct failure time and using Jeffreys' (1961) vague priors. Distinction between a discrete hazard and a piecewise exponential model is made. Bayes estimators of survival probabilities are derived. For a single sample and a discrete hazard, the Bayes estimator is shown to be larger than Nelson's (1972) which in turn is larger than Kaplan-Meier's (1958) estimator. With a piecewise exponential model, the Bayes estimator is also shown to be larger than that using maximum likelihood. Presuming a proportional hazards formulation to incorporate covariate information and a discrete underlying hazard model, the marginal posterior distribution of the regression parameters is proportional to Breslow's (1974) approximation to the marginal likelihood of Kalbfleisch and Prentice (1973). A refinement of Breslow's (1974) approximate likelihood is obtained when a piecewise exponential model is used for the underlying hazard. These results serve as illustrations of differences between estimators obtained from a frequentist's approach and a Bayes strategy with vague priors. Further, the Bayes results have practical advantages.
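The ordering stated above (Bayes larger than Nelson, Nelson larger than Kaplan-Meier) can be illustrated for the Nelson/Kaplan-Meier pair in a few lines; the function name and the toy risk-set data are ours.

```python
from math import exp

def km_and_nelson(steps):
    # steps: list of (d_i, n_i) = (deaths, number at risk) at each
    # distinct failure time. Kaplan-Meier multiplies the survival
    # factors (1 - d/n); Nelson's estimator exponentiates minus the
    # cumulative hazard, and exp(-x) >= 1 - x guarantees it is never
    # smaller than Kaplan-Meier.
    km, cum_hazard = 1.0, 0.0
    for d, n in steps:
        km *= 1 - d / n
        cum_hazard += d / n
    return km, exp(-cum_hazard)

km, nelson = km_and_nelson([(1, 10), (2, 8), (1, 5)])
```

The abstract's claim that the vague-prior Bayes estimator is larger still sits on top of this elementary inequality.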


01 Jan 1985
TL;DR: In this article, a modification separates the two aspects of probability: probability as a part of physical theories (factual) and as a basis for statistical inference (cognitive), and is represented by probability structures as in the earlier papers, but now built independently of the language.
Abstract: This modification separates the two aspects of probability: probability as a part of physical theories (factual), and as a basis for statistical inference (cognitive). Factual probability is represented by probability structures as in the earlier papers, but now built independently of the language. Cognitive probability is interpreted as a form of "partial truth". The paper also contains a discussion of the Principle of Insufficient Reason and of Bayesian and classical statistical methods, in the light of the new definition.

Journal ArticleDOI
TL;DR: This clinical note or reminder indicates how Bayesian inference can aid decision making in an institution.
Abstract: This clinical note or reminder indicates how Bayesian inference can aid decision making in an institution. Key references are given.