
Showing papers on "Bayesian probability published in 1973"



Journal ArticleDOI
TL;DR: In this paper, a Bayesian procedure for estimating true mastery scores has been proposed, which uses information about other members of a student's group (collateral information), but the resulting estimation is still criterion referenced rather than norm referenced in that the student is compared to a standard rather than to other students.
Abstract: In this paper, an attempt has been made to synthesize some of the current thinking in the area of criterion-referenced testing as well as to provide the beginning of an integration of theory and method for such testing. Since criterion-referenced testing is viewed from a decision-theoretic point of view, approaches to reliability and validity estimation consistent with this philosophy are suggested. Also, to improve the decision-making accuracy of criterion-referenced tests, a Bayesian procedure for estimating true mastery scores has been proposed. This Bayesian procedure uses information about other members of a student's group (collateral information), but the resulting estimation is still criterion referenced rather than norm referenced in that the student is compared to a standard rather than to other students. In theory, the Bayesian procedure increases the “effective length” of the test by improving the reliability, the validity, and more importantly, the decision-making accuracy of the criterion-referenced test scores.
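The shrinkage idea described in this abstract can be sketched with Kelley's classical true-score formula, used here as a simple stand-in for the paper's Bayesian estimate; the scores, reliability, and cutoff below are hypothetical.

```python
# Sketch: shrinking an observed score toward the group mean using
# collateral information (Kelley's classical formula, a stand-in for
# the paper's Bayesian true-score estimate). All numbers hypothetical.

def shrunk_true_score(observed, group_mean, reliability):
    """Estimated true mastery score: a weighted average of the student's
    observed score and the group mean, weighted by test reliability."""
    return reliability * observed + (1.0 - reliability) * group_mean

scores = [14, 18, 10, 16, 12]          # observed scores on a 20-item test
mean = sum(scores) / len(scores)       # collateral information: group mean
estimates = [shrunk_true_score(s, mean, 0.80) for s in scores]

# Comparing each estimate to a fixed standard keeps the decision
# criterion-referenced, even though group data informed the estimate.
mastery_cutoff = 15
decisions = [est >= mastery_cutoff for est in estimates]
print([round(e, 1) for e in estimates])
print(decisions)
```

Note how each estimate is pulled toward the group mean, which is exactly how collateral information raises decision accuracy without making the decision norm-referenced.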

240 citations



Journal ArticleDOI
TL;DR: The identification method is based on Bayes's theorem and allows for dependent tests and missing data in the probability matrix and suggests a definite identification only if the Bayesian probability of one of the taxa exceeds a threshold level.
Abstract: Summary: The methods incorporated in the computer program used in a trial of computer-aided identification of bacteria are described. The identification method is based on Bayes's theorem and allows for dependent tests and missing data in the probability matrix. It was found useful in developing the method to take account of the occurrence of errors in bacteriological testing. The method suggests a definite identification only if the Bayesian probability of one of the taxa exceeds a threshold level; if not, a separate procedure selects the best tests to continue the identification.
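A minimal sketch of the threshold rule described here, assuming conditionally independent tests for brevity (the paper allows dependent tests); the taxa, test probabilities, and the 0.999 threshold are hypothetical.

```python
# Sketch of threshold-based Bayesian identification. Taxa, test
# probabilities, and the threshold are hypothetical; tests are treated
# as independent here, unlike the paper's fuller method.

def identify(priors, likelihoods, results, threshold=0.999):
    """Return (taxon, posterior) if one taxon's posterior exceeds the
    threshold, else None. likelihoods[taxon][test] is P(positive | taxon);
    results[test] is True/False; untested entries are simply omitted,
    which is how missing data drop out of Bayes's theorem."""
    post = dict(priors)
    for test, positive in results.items():
        for taxon in post:
            p = likelihoods[taxon][test]
            post[taxon] *= p if positive else (1.0 - p)
    total = sum(post.values())
    post = {t: v / total for t, v in post.items()}
    best = max(post, key=post.get)
    return (best, post[best]) if post[best] >= threshold else None

priors = {"A": 0.5, "B": 0.5}
likelihoods = {"A": {"oxidase": 0.99, "motility": 0.95},
               "B": {"oxidase": 0.01, "motility": 0.05}}
print(identify(priors, likelihoods, {"oxidase": True, "motility": True}))
print(identify(priors, likelihoods, {"oxidase": True}))  # below threshold
```

When no taxon clears the threshold the method returns nothing, which is the point at which the paper's separate test-selection procedure would take over.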

144 citations


Journal ArticleDOI
TL;DR: In this article, a two-stage prior distribution is constructed which assumes that probabilities corresponding to adjacent intervals are likely to be closely related, and posterior estimates are obtained which combine information between the intervals and have the practical effect of smoothing the histogram.
Abstract: SUMMARY This paper describes a Bayesian procedure for the simultaneous estimation of the probabilities in a histogram. A two-stage prior distribution is constructed which assumes that probabilities corresponding to adjacent intervals are likely to be closely related. The method employs multivariate logit transformations, and a covariance structure similar to that assumed in the first-order autoregressive process. Posterior estimates are obtained which combine information between the intervals and have the practical effect of smoothing the histogram. A weakness of the Bayesian approach has been its inability to cope with independent observations whose common distribution is not restricted to any particular family. We seek to remedy this deficiency by providing a technique for the analysis of n observations, which are assumed independent and identically distributed with unknown density q(y) concentrated on a finite interval I of the real line. We assume that q(y) is thought a priori to possess a continuous first derivative for all y ∈ I, or some similar property of smoothness. The problem is to obtain estimates for q(y) and its moments which take account of this prior information. It is treated by using a histogram to approximate q(y), and by estimating the probabilities in the histogram under the assumption that they are related in a certain manner. A disadvantage of our method is that the histogram estimate for q(y) will be discontinuous at several points in I, and will not usually satisfy the smoothness property assumed a priori for the theoretical density. However, we hope that this will to some extent be compensated for by the advantages of the estimation procedure proposed for the probabilities in the histogram. Good & Gaskins (1971) and Boneva, Kendall & Stefanov (1971) provided sampling-theory methods for the estimation of a density.
An advantage of a Bayesian approach is that it takes proper account of the prior information, since the latter may be incorrectly emphasized when basing the analysis on intuitive ideas. Our method will be fairly flexible in allowing information about the degree of smoothness, and the shape, of the density to be incorporated into the prior model.
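A crude caricature of the smoothing idea, not the paper's actual posterior computation: move to the logit scale, pull each cell toward its neighbors (mimicking the first-order autoregressive prior on adjacent cells), and transform back. The counts and smoothing weight are hypothetical.

```python
import math

# Caricature of logit-scale histogram smoothing (not the paper's exact
# posterior computation). Counts and the smoothing weight are made up.

def smooth_histogram(counts, weight=0.5):
    n = sum(counts)
    probs = [(c + 0.5) / (n + 0.5 * len(counts)) for c in counts]  # avoid log(0)
    logits = [math.log(p) for p in probs]
    smoothed = []
    for i, v in enumerate(logits):
        # Pull each logit toward the mean of its neighbors.
        nbrs = [logits[j] for j in (i - 1, i + 1) if 0 <= j < len(logits)]
        smoothed.append((1 - weight) * v + weight * sum(nbrs) / len(nbrs))
    probs = [math.exp(v) for v in smoothed]
    z = sum(probs)
    return [p / z for p in probs]

counts = [2, 9, 3, 8, 4]      # a hypothetical ragged histogram
print([round(p, 3) for p in smooth_histogram(counts)])
```

The ragged sawtooth in the raw counts is flattened considerably, which is the "practical effect of smoothing the histogram" the summary describes.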

83 citations


Journal ArticleDOI
TL;DR: In this article, the problem of testing the existence of a trend in the means θi of Poisson distributions is considered. It is assumed that these means are changing exponentially, that is, log θi = α + βxi.
Abstract: SUMMARY This paper is concerned with the problem of testing the existence of a trend in the means θi of Poisson distributions. It is assumed that these means are changing exponentially, that is, log θi = α + βxi. A classical method used for testing the hypothesis β = 0 is reviewed. The exact Bayesian distribution for β is derived and a Bayesian approximation is suggested which proved to be very useful. Finally, the three methods are compared by means of numerical examples.
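A grid approximation gives the flavor of the Bayesian side of this problem, though it is not the paper's exact derivation: flat priors over a grid for (α, β), sum out α, and read off the marginal posterior of the trend parameter β. The counts and grid ranges below are hypothetical.

```python
import math

# Grid approximation to the posterior of beta in log(theta_i) = alpha +
# beta * x_i for Poisson counts. A sketch with flat priors and assumed
# grid ranges, not the paper's exact Bayesian distribution.

counts = [3, 4, 6, 9, 13]            # hypothetical counts at x = 0..4
xs = list(range(len(counts)))

def log_lik(alpha, beta):
    total = 0.0
    for x, y in zip(xs, counts):
        log_lam = alpha + beta * x
        total += y * log_lam - math.exp(log_lam) - math.lgamma(y + 1)
    return total

alphas = [i * 0.02 for i in range(0, 151)]      # alpha grid: 0.00 .. 3.00
betas = [i * 0.01 for i in range(-100, 101)]    # beta grid: -1.00 .. 1.00

# Unnormalized log marginal posterior of beta: sum the likelihood over alpha.
marg = []
for b in betas:
    lls = [log_lik(a, b) for a in alphas]
    m = max(lls)
    marg.append(m + math.log(sum(math.exp(v - m) for v in lls)))

mm = max(marg)
weights = [math.exp(v - mm) for v in marg]
z = sum(weights)
post_mean = sum(b * w for b, w in zip(betas, weights)) / z
prob_positive = sum(w for b, w in zip(betas, weights) if b > 0) / z
print(round(post_mean, 2), round(prob_positive, 3))
```

With these rising counts, nearly all posterior mass falls on β > 0, which is the Bayesian analogue of rejecting β = 0.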

50 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to present a method of detection of spuriosity using a Bayesian approach, and the posterior of the “shift parameter a” is obtained, and it is shown to be a weighted combination of (Univariate) generalized t-distributions.
Abstract: The objective of this paper is to present a method of detection of spuriosity using a Bayesian approach. The model used is very specific: all observations are hopefully generated independently from the same normal source, N(μ, σ²), but it is feared that one of these may come from a spurious source, N(μ + a, σ²). The posterior of the “shift parameter a” is obtained, and it is shown to be a weighted combination of (univariate) generalized t-distributions. The weights are very interesting and give much information, as do the separate generalized t-distributions. Examples are given to illustrate the use of the posterior of a, the weights, etc. The procedure generalizes easily to the multivariate case, i.e., where the observations are hopefully generated independently from N(μ, Σ), but the fear exists that one of the observations is spurious, from N(μ + a, Σ). A brief summary of results for this case is given.
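A drastically simplified illustration of the weights: take μ and σ as known and give the shift a a flat prior, so the suspected observation drops out of the likelihood and the weight for "observation i is spurious" depends only on how well the others fit. The full paper treats μ and σ as unknown, producing mixtures of generalized t-distributions instead; the data below are hypothetical.

```python
import math

# Simplified spuriosity weights: mu and sigma known, flat prior on the
# shift a, so the suspected point is effectively left out. The paper's
# full treatment (mu, sigma unknown) yields generalized t mixtures.

def spuriosity_weights(xs, mu, sigma):
    def log_phi(x):
        # Normal log-density up to an additive constant.
        return -0.5 * ((x - mu) / sigma) ** 2
    total = sum(log_phi(x) for x in xs)
    logw = [total - log_phi(x) for x in xs]    # leave observation i out
    m = max(logw)
    w = [math.exp(v - m) for v in logw]
    s = sum(w)
    return [v / s for v in w]

data = [9.8, 10.1, 10.3, 9.9, 14.0]   # hypothetical sample; last point suspect
w = spuriosity_weights(data, mu=10.0, sigma=0.5)
print([round(v, 3) for v in w])
```

Essentially all the weight lands on the outlying observation, showing why the abstract calls the weights themselves informative.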

46 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian approach to the analysis of paired comparison experiments is presented, where two statistical models are considered: the multibinomial and the Bradley-Terry model, and the rankings determined by the two estimators are identical and are Bayes with respect to a large class of acceptable loss functions.
Abstract: SUMMARY A Bayesian approach to the analysis of paired comparison experiments is presented. Two statistical models are considered: the multibinomial and the Bradley-Terry model. For each, the natural conjugate family of priors is used to represent prior beliefs, and the parameters of the prior are given meaningful interpretations. Two different estimators are suggested for the model parameters, the preference probabilities of the multibinomial model and the worth parameters of the Bradley-Terry model. For the Bradley-Terry model the estimated worth parameters can be used to rank the set of objects. When the posterior distribution satisfies a criterion of balance, the rankings determined by the two estimators are identical and are Bayes with respect to a large class of acceptable loss functions. Furthermore, the Bayes ranking can be easily obtained without explicitly evaluating the estimators of the worth parameters.
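The multibinomial half of this setup is easy to sketch: each pairwise preference probability gets a conjugate Beta prior, updated by the observed wins. The win counts and the average-preference scoring rule used for ranking below are my own illustration, not the paper's balance criterion.

```python
# Sketch of the multibinomial model: a conjugate Beta prior on each
# pairwise preference probability. Win counts, prior parameters, and
# the average-preference ranking rule are illustrative only.

def posterior_mean(wins_ij, wins_ji, a=1.0, b=1.0):
    """Posterior mean of P(i beats j) under a Beta(a, b) prior."""
    return (a + wins_ij) / (a + b + wins_ij + wins_ji)

# Wins out of 10 comparisons per pair for objects A, B, C.
wins = {("A", "B"): (8, 2), ("A", "C"): (7, 3), ("B", "C"): (6, 4)}
pref = {pair: posterior_mean(w, l) for pair, (w, l) in wins.items()}

# Rank objects by their average estimated preference probability.
objects = ["A", "B", "C"]
score = {o: 0.0 for o in objects}
for (i, j), p in pref.items():
    score[i] += p
    score[j] += 1.0 - p
ranking = sorted(objects, key=score.get, reverse=True)
print(pref)
print(ranking)
```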

41 citations


Journal ArticleDOI
TL;DR: The field of investment analysis provides an example of a situation in which individuals or corporations make inferences and decisions in the face of uncertainty about future events, and it is necessary to take account of this uncertainty when modeling inferential or decision-making problems relating to investment analysis as discussed by the authors.
Abstract: The field of investment analysis provides an example of a situation in which individuals or corporations make inferences and decisions in the face of uncertainty about future events. The uncertainty concerns future security prices and related variables, and it is necessary to take account of this uncertainty when modeling inferential or decision-making problems relating to investment analysis. Since probability can be thought of as the mathematical language of uncertainty, formal models for decision making under uncertainty require probabilistic inputs. In financial decision making, this is illustrated by the models that have been developed for the portfolio selection problem; such models generally require the assessment of probability distributions (or at least some summary measures of probability distributions) for future prices or returns of the securities that are being considered for inclusion in the portfolio (e.g., see Markowitz [11] and Sharpe [19]).

34 citations


Journal ArticleDOI
TL;DR: Bayesian estimators of the parameters of the Weibull hazard function in the restoration process case are presented and it is shown that the model does yield lower machine operating costs than the non-Bayesian approach.
Abstract: This article presents Bayesian estimators of the parameters of the Weibull hazard function in the restoration process case. These estimators are used in a model which optimizes the interval between machine overhauls. Since the properties of the model are not in closed form, a simulation experiment is used to evaluate the effectiveness of the model. The simulation results show that the model does yield lower machine operating costs than the non-Bayesian approach. The effectiveness of the model could be increased by improvements in the quality of the prior estimates used in Bayesian estimation.
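A sketch of the kind of optimization involved, under an assumed simplification rather than the paper's model: with a Weibull hazard and minimal repair between overhauls, the expected number of failures in (0, T) is (T/η)^β, so the cost rate is (overhaul cost + failure cost × (T/η)^β)/T. All parameter values are hypothetical; in the paper, β and η would be Bayesian estimates updated from failure data.

```python
# Sketch: choosing the overhaul interval T that minimizes cost per unit
# time under a Weibull hazard with minimal repair between overhauls.
# Parameter values are hypothetical; the paper would use Bayesian
# estimates of beta and eta here.

def cost_rate(T, beta, eta, overhaul_cost, failure_cost):
    expected_failures = (T / eta) ** beta   # power-law failure count
    return (overhaul_cost + failure_cost * expected_failures) / T

def best_interval(beta, eta, overhaul_cost, failure_cost):
    grid = [t / 10.0 for t in range(1, 501)]   # candidate intervals 0.1 .. 50.0
    return min(grid, key=lambda T: cost_rate(T, beta, eta,
                                             overhaul_cost, failure_cost))

T_star = best_interval(beta=2.0, eta=10.0, overhaul_cost=5.0, failure_cost=20.0)
print(T_star)
```

For β = 2 the optimum has the closed form T* = η·sqrt(overhaul_cost / failure_cost), which the grid search recovers; better priors on β and η translate directly into a better choice of T, matching the abstract's point about prior quality.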

31 citations


Journal ArticleDOI
TL;DR: In this paper, a normative 2-stage model for incorporating reliability measurements of data-reporting sources in a Bayesian inference system is presented, where human subjects are asked to make intuitive inferences about two hypotheses on the basis of sample data which were reported with a given reliability.
Abstract: A normative 2-stage model for incorporating reliability measurements of data-reporting sources in a Bayesian inference system is presented. An experiment required human subjects to make intuitive inferences about two hypotheses on the basis of sample data which were reported with a given reliability. When compared with the optimal model, subjects exhibited systematic errors in estimating the diagnostic impact of less than perfectly reliable data. Their responses reflected the use of specific nonoptimal heuristic strategies to process the information. A utility function was added to the normative model to illustrate how a best choice might be made from among potential data-gathering experiments whose costs increase with their reliabilities. Recommendations for using computer aids to enhance efficiency in inference systems are made.
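The two-stage (cascaded) model can be sketched in a few lines: a source reports a binary datum with reliability r, so the report's likelihood under each hypothesis is a mixture of the datum's likelihood and its complement, and ordinary Bayes is then applied to the report. The numbers below are hypothetical.

```python
# Sketch of the normative two-stage model: stage 1 converts datum
# likelihoods into report likelihoods via the source's reliability;
# stage 2 applies ordinary Bayes to the report. Numbers hypothetical.

def posterior_h1(prior_h1, p_datum_h1, p_datum_h2, reliability):
    # Stage 1: likelihood of the *report* given each hypothesis.
    like_h1 = reliability * p_datum_h1 + (1 - reliability) * (1 - p_datum_h1)
    like_h2 = reliability * p_datum_h2 + (1 - reliability) * (1 - p_datum_h2)
    # Stage 2: ordinary Bayes on the report likelihoods.
    num = prior_h1 * like_h1
    return num / (num + (1 - prior_h1) * like_h2)

print(posterior_h1(0.5, 0.8, 0.3, 1.0))   # perfectly reliable report
print(posterior_h1(0.5, 0.8, 0.3, 0.7))   # 70% reliable: impact attenuated
```

At reliability 0.5 the report is worthless and the posterior stays at the prior; the subjects' systematic error in the experiment was, roughly, failing to attenuate enough for intermediate reliabilities.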

Journal ArticleDOI
TL;DR: In this article, the estimation of a quadratic regression curve with positive coefficients is treated by a Bayesian method, which extends easily to deal with a general linear model under parameter constraints.
Abstract: SUMMARY The estimation of a quadratic regression curve when the quadratic coefficient is known to be positive is treated by a Bayesian method. The method extends easily to deal with a general linear model under parameter constraints.

Journal ArticleDOI
TL;DR: In this article, procedures are developed for specifying confidence intervals, from possibly censored data, for any characteristic of a two-parameter Weibull population; although formally Bayesian, the methods have an exact relative frequency interpretation for uncensored and Type II censored data.
Abstract: Procedures are developed for specifying confidence intervals, from possibly censored data, for any characteristic of a two-parameter Weibull population. Although the methods are formally Bayesian, they have an exact relative frequency interpretation for uncensored and Type II censored data. Some simulation results are given to indicate relative frequency behavior for Type I censoring, and for comparison to other methods.

Journal ArticleDOI
TL;DR: A probabilistic model for interactive retrieval that applies the principles of Bayesian statistical decision theory to the problem of optimally restructuring a search strategy in an interactive environment is presented.


Journal ArticleDOI
TL;DR: In this article, a sampling-theory analysis of the CD production function was presented, in which nondata-based prior information was added to aggregate time-series information in order to deal with what appears to be a multicollinearity problem.
Abstract: IN ANALYSES OF PRODUCTION-FUNCTION MODELS employing aggregate time-series data, it is often difficult to obtain precise estimates of parameters because input variables are usually highly intercorrelated. For example, in the case of a Cobb-Douglas (CD) production function with neutral technical change, the labor, capital, and time-trend variables are frequently highly intercorrelated, a fact that may lead to imprecise parameter estimates and, as will be seen below with U.S. data, highly implausible point estimates for certain parameters. It is generally recognized that introduction of prior information is one possible way of dealing with the above problems, sometimes referred to as multicollinearity problems. Another course of action is to extend the data base, say, by combining cross-section data with the available time-series data. In both approaches, or in a combination of them, additional information is added to the information in the aggregate time-series sample in an attempt to improve the quality of inferences. In the present paper, we review a sampling-theory analysis of the CD production function put forward by Morishima and Saito [3], in which nondata-based prior information was added to aggregate time-series information in order to deal with what appears to be a multicollinearity problem. Then we turn to an analysis of the Morishima-Saito (MS) production function problem, using the Bayesian approach, in which nondata-based prior information is introduced by use of prior probability density functions (pdf's) for the parameters. By the use of prior pdf's in the Bayesian approach, it will be seen that prior information can be introduced in a rather flexible manner, and posterior pdf's for parameters of interest, reflecting both sample and prior information, can be readily computed.
In particular, we wish to assess how sensitive inferences about the technical-change parameter and other parameters are to the form and manner in which prior information is introduced. The plan of the paper is as follows. In Section 2, we state the problem and review some sampling-theory results. We then present Bayesian analyses of the problem in Section 3 and provide a summary of results and some concluding remarks in Section 4.


Journal ArticleDOI
TL;DR: In this paper, the problem of drawing inferences on the difference of the population means when an incomplete sample from a bivariate normal population is available is considered from a Bayesian approach.
Abstract: We consider the problem of drawing inferences on the difference of the population means when an incomplete sample from a bivariate normal population is available. Whereas Mehta and Gurland [7] have considered this problem from the sampling theory point of view, we tackle here the same problem from a Bayesian approach.

Journal ArticleDOI
TL;DR: In this article, the authors examined the use of a two-way random effects model with correlated errors and additional explanatory variables in combining cross-section with time series data, and developed methods for computing posterior distributions of slope coefficients.
Abstract: This article examines the use of a two-way random-effects model with correlated errors and additional explanatory variables in combining cross-section with time-series data. The model is analyzed from a Bayesian viewpoint. Methods are developed for computing posterior distributions of slope coefficients. The advantage of our approach over sampling-theory approaches is briefly discussed. It is shown how one can obtain reasonable inferences about the slope coefficients, which are the parameters of interest, in the presence of nonestimable nuisance parameters by judicious use of sample and prior information.



Journal Article
TL;DR: The two approaches may be combined by defining the data worth through simulation under the condition of assumed statistical properties and releasing the conditioning by making use of the prior distributions of the unknown statistics obtained from the Bayesian approach.
Abstract: Reviews of current methodologies for the determination of the worth of hydrologic data indicate that each method has certain shortcomings. The simulation approach requires information concerning the statistical properties of the data that are not usually known. The Bayesian approach often leads to mathematically intractable relations. The two approaches may be combined by defining the data worth through simulation under the condition of assumed statistical properties and releasing the conditioning by making use of the prior distributions of the unknown statistics obtained from the Bayesian approach. This combined approach circumvents the problems encountered when either the Bayesian or the simulation approach is used exclusively. An example illustrates the use of the combined approach in evaluating the worth of flood data in the design of highway crossings.

Journal ArticleDOI
TL;DR: In this paper, structural inference is employed to develop the unique probability density functions for the location and scale parameters of the double exponential distribution which is the asymptotic statistical model of maximal extreme values.
Abstract: The method of structural inference is employed to develop the unique probability density functions for the location and scale parameters of the double exponential distribution which is the asymptotic statistical model of maximal extreme values. The corresponding structural prediction density of a maximal extreme observation is deduced. These structural densities are only conditional on a given sample of extremal observations. Expected values of parameters and future observations are derived, and their applications in decision theory are pointed out. The Bayesian interpretation of the structural parameter densities reveals that these densities are conjugate and that the implied prior densities are familiar ones in Bayesian analysis.

01 Jun 1973
TL;DR: Bayesian fixed-time tests, Bayesian/Classical fixed-time tests, and sequential Bayesian tests were developed and tabulated, and it was shown that updates in the prior distribution are easily made.
Abstract: …(called Bayes/Classical in this report) when the producer and consumer cannot agree on a prior distribution; develop methods of updating existing prior distributions; develop a preliminary military standard for BRDT; investigate some special problems; fit additional prior distributions. Bayesian fixed-time tests, Bayesian/Classical fixed-time tests, and sequential Bayesian tests were developed and tabulated. These tests form an essential part of the preliminary military standard, which was also developed. Additional fits of the inverted gamma distribution reconfirmed its choice as a prior distribution, and further study showed that updates in the prior distribution are easily made. A test based on probability of acceptance is satisfactory for testing for shifts in the prior distribution. Tables were developed giving the truncation points for the sequential tests. At this time, no satisfactory solution has been found for placing more than one equipment on test at a time.

Journal ArticleDOI
TL;DR: The Bayesian approach to finite population estimation presented in this paper requires the use of the super-population concept of Fisher (1956), Cochran (1939, 1946), and others as discussed by the authors.
Abstract: The Bayesian approach to finite population estimation presented in this paper requires the use of the super-population concept of Fisher (1956), Cochran (1939, 1946), and others. In this framework, the finite population under consideration is assumed to be a random sample from some hypothetical infinite population. Here, we assume that this hypothetical infinite population has a probability distribution function of known form. This latter assumption has been used by Ericson (1969) in a Bayesian framework, and by Kalbfleisch and Sprott (1956) in a fiducial framework. In this paper, we follow the approach of Fisher (1956) and Kalbfleisch and Sprott (1968) and use a two-step procedure to obtain finite population estimates. In general terms, this two-step procedure is as follows. First, we use the sample drawn from the finite population to make an inference concerning the parameters of the distribution of the superpopulation. For a Bayesian, this means that we find the posterior distribution of the parameter...
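The two-step procedure can be sketched for the simplest case, assumed here for illustration: a normal superpopulation with known σ and a flat prior on its mean. Step 1 gives the posterior of the superpopulation mean; step 2 predicts the unsampled units and combines them with the sample.

```python
# Sketch of the two-step superpopulation approach: normal
# superpopulation, known sigma, flat prior on its mean mu. A
# simplification of the paper's general known-form assumption.

def finite_population_posterior(sample, N, sigma):
    n = len(sample)
    ybar = sum(sample) / n
    # Step 1: posterior of the superpopulation mean is N(ybar, sigma^2/n).
    # Step 2: the N - n unsampled units have predictive mean ybar, so the
    # posterior mean of the finite population mean is ybar; its variance
    # combines parameter uncertainty and prediction uncertainty.
    mean = ybar
    var = ((N - n) / N) ** 2 * (sigma ** 2 / n + sigma ** 2 / (N - n))
    return mean, var

sample = [12.0, 15.0, 11.0, 14.0]     # hypothetical sample, n = 4
mean, var = finite_population_posterior(sample, N=100, sigma=2.0)
print(round(mean, 2), round(var, 4))
```

With a flat prior the point estimate coincides with the sample mean; the Bayesian machinery shows up in the variance, which correctly shrinks to zero as n approaches N.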

Journal ArticleDOI
TL;DR: In this paper, the shape of the failure rate is investigated in terms of the conditional failure rate h(z | a) and the prior g(a).
Abstract: The widely used failure rate often has uncertain parameters, and the fact of non-failure yields information on these parameters. As Lucas [4] pointed out, specification of a prior distribution, g(a), on the parameter vector, a, leads via Bayesian analysis from a conditional failure rate, h(z | a), to a modified rate, h*(z). Of interest are results about the shape of h*(z) in terms of h(z | a) and g(a). A key result of the article is:
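A classic instance of this mixing effect, used here purely as an illustrative sketch (not necessarily the article's own result): if each conditional lifetime is exponential with constant hazard h(z | a) = a, and a has a Gamma(α, β) prior, the unconditional hazard works out to h*(z) = α/(β + z), which decreases in z because survival to z shifts posterior weight toward small a.

```python
# Illustrative sketch: a gamma mixture of exponentials. Each conditional
# hazard h(z | a) = a is constant, yet the unconditional hazard
# h*(z) = alpha / (beta + z) is strictly decreasing. Parameters are
# hypothetical.

def mixed_hazard(z, alpha, beta):
    """Unconditional hazard of a Gamma(alpha, beta) mixture of
    exponential lifetimes: f*(z)/S*(z) = alpha / (beta + z)."""
    return alpha / (beta + z)

hazards = [mixed_hazard(z, alpha=2.0, beta=1.0) for z in (0.0, 1.0, 2.0, 5.0)]
print(hazards)
```

This shows concretely how parameter uncertainty plus observed non-failure can change the shape of the hazard, even when every conditional hazard is flat.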

Journal ArticleDOI
TL;DR: In this article, the authors attempted to show the way in which Bayesian and classical approaches are both similar and divergent, by relating problem variables to the statistical problem continuum, the question of which approach is most appropriate in a particular situation can be more readily resolved.
Abstract: We have attempted to show the way in which Bayesian and classical approaches are both similar and divergent. The vehicle for discussion, primarily, involved considerations of cost consequences, planning horizons and numbers of alternatives. In turn, these variables were related to the statistical problem continuum, with classical and Bayesian approaches at opposite ends of the scale. By relating problem variables to this continuum, the question of which approach is most appropriate in a particular situation can be more readily resolved.

Book ChapterDOI
01 Jan 1973
TL;DR: In this paper, the authors discuss the various complexities that exist in the use of probabilities in applied situations and explore a complexity that falls between these and that is concerned with a parameter that becomes redundant if symmetrical error is used.
Abstract: Publisher Summary This chapter discusses the various complexities that exist in the use of probabilities in applied situations. The most prominent of these has long been recognized and concerns conditional probability given an event of zero probability. A partition is needed to define the necessary limiting process. At another extreme is the Bayesian claim that probabilities can describe all unknowns. The chapter explores a complexity that falls somewhere between these and concerns a parameter that becomes redundant if symmetrical error is used; information that specifies the value of the symmetry parameter may produce different results if applied formally to the conditional distribution describing the realization or if used realistically to modify the model. The complexity examined in the chapter concerns a multidimensional parameter and the effect of information that changes the dimension of the parameter. The complexity arises with a probability space model, and it arises in the corresponding Bayesian analysis based on the conventional right-invariant prior. The complexity is resolved for the probability space model. Its resolution for Bayesian theory would presumably need to wait until there is a Bayesian substantiation of the use of the right-invariant prior.


Journal ArticleDOI
TL;DR: Scheinok's (1972) empirical results, obtained from using Bahadur's expansion in Bayes's theorem, are explained by noting that the expansion is an exact representation of observed probabilities and thus no information was gained by its use.
Abstract: Scheinok's (1972) empirical results, obtained from using Bahadur's expansion in Bayes's theorem, are explained by noting that the expansion is an exact representation of observed probabilities and thus no information was gained by its use. The calculated and observed joint probability distributions will always be equal. It is also demonstrated that posterior probabilities equal to the ratio of observed patients with a given profile in a disease category to the total number of patients with the symptom profile are always obtained when actuarial probability estimates are used in Bayes's theorem.