
Showing papers on "Bayesian probability published in 1986"


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of economic forecasting, the justification for the Bayesian approach, its implementation, and the performance of one small BVAR model over the past five years.
Abstract: The results obtained in five years of forecasting with Bayesian vector autoregressions (BVAR's) demonstrate that this inexpensive, reproducible statistical technique is as accurate, on average, as those used by the best known commercial forecasting services. This article considers the problem of economic forecasting, the justification for the Bayesian approach, its implementation, and the performance of one small BVAR model over the past five years.

1,115 citations


Journal ArticleDOI
TL;DR: In this paper, a general framework for estimating breeding value with or without selection or assortative mating, including the situation where variances and covariances are unknown, is presented.
Abstract: This paper proposes the Bayesian approach as a conceptual strategy for solving problems arising in animal breeding theory. General elements of Bayesian inference, e.g., prior and posterior distributions, informative vs. noninformative priors, likelihood functions, finite samples, "memory" properties and integration of nuisance parameters are illustrated with animal breeding examples. Shrinkage estimators are discussed from a Bayesian viewpoint. A general framework for estimation of breeding value with or without selection or assortative mating, including the situation where variances and covariances are unknown, is presented. Selection indexes, best linear unbiased prediction, nonlinear merit functions, nonlinear models and estimation of genetic parameters are discussed from a Bayesian perspective.

364 citations


Journal ArticleDOI
TL;DR: A Bayesian approach to estimation and hypothesis testing for a Poisson process with a change point is developed and illustrated with an example.
Abstract: SUMMARY A Bayesian approach to estimation and hypothesis testing for a Poisson process with a change-point is developed, and an example given.
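A minimal sketch of the conjugate computation such an analysis involves, assuming independent Gamma(a, b) priors on the rates before and after the change point and a discrete uniform prior on the change point itself (the priors and the toy counts are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.special import gammaln

def changepoint_posterior(counts, a=1.0, b=1.0):
    """Posterior over the change point tau for a sequence of Poisson counts.

    Rates lambda_1 (before tau) and lambda_2 (from tau on) carry independent
    Gamma(a, b) priors and are integrated out analytically; tau has a
    discrete uniform prior. Terms constant in tau are dropped.
    """
    n = len(counts)
    log_post = np.empty(n - 1)
    for tau in range(1, n):                     # change starts at index tau
        s1, n1 = counts[:tau].sum(), tau
        s2, n2 = counts[tau:].sum(), n - tau
        log_post[tau - 1] = (gammaln(a + s1) - (a + s1) * np.log(b + n1)
                             + gammaln(a + s2) - (a + s2) * np.log(b + n2))
    log_post -= log_post.max()                  # guard against overflow
    post = np.exp(log_post)
    return post / post.sum()

counts = np.array([3, 2, 4, 3, 2, 8, 9, 7, 10, 8])   # toy data
print(changepoint_posterior(counts).round(3))
```

On the toy counts the posterior mass concentrates at tau = 5, where the rate visibly jumps.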

238 citations


Book
S K Sinha
01 Jan 1986
TL;DR: This book covers the exponential failure model; gamma and Weibull distributions; normal and related distributions; mixture distributions and competing risks; tests of hypotheses and confidence intervals; Bayes estimators; Bayesian approximation and reliability estimation; Bayesian intervals for parameters and reliability functions; and the reliability of series/parallel systems.
Abstract: Contents: Foreword; Preface; Introduction; Exponential Failure Model; Gamma and Weibull Distributions; Normal and Related Distributions; Mixture Distributions and Competing Risks; Tests of Hypotheses and Confidence Intervals; Bayes Estimators; Bayesian Approximation and Reliability Estimation; Bayesian Intervals for Parameters and Reliability Functions; Reliability of Series/Parallel Systems; Appendixes; References; Author Index; Subject Index.

170 citations




Book ChapterDOI
TL;DR: This essay discusses four results about the principle of maximizing entropy (MAXENT) and its connections with Bayesian theory, showing that an equivalence between the two holds only under restricted 0-1 expectation constraints, and that MAXENT inference is sensitive to the choice of the algebra of possibilities even when all empirical constraints are satisfied in each measure space.
Abstract: This essay is, primarily, a discussion of four results about the principle of maximizing entropy (MAXENT) and its connections with Bayesian theory. Result 1 provides a restricted equivalence between the two where the Bayesian model for MAXENT inference uses an a priori probability that is uniform, and where all MAXENT constraints are limited to 0–1 expectations for simple indicator-variables. The other three results report on an inability to extend the equivalence beyond these specialized constraints. Result 2 establishes a sensitivity of MAXENT inference to the choice of the algebra of possibilities even though all empirical constraints imposed on the MAXENT solution are satisfied in each measure space considered. The resulting MAXENT distribution is not invariant over the choice of measure space. Thus, old and familiar problems with the Laplacean principle of Insufficient Reason also plague MAXENT theory. Result 3 builds upon the findings of Friedman and Shimony (1971,1973) and demonstrates the absence of an exchangeable, Bayesian model for predictive MAXENT distributions when the MAXENT constraints are interpreted according to Jaynes’ (1978) prescription for his (1963) Brandeis Dice problem. Last, Result 4 generalizes the Friedman and Shimony objection to cross-entropy (Kullback-information) shifts subject to a constraint of a new odds-ratio for two disjoint events.
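For orientation, the optimization these results concern has the standard textbook form (notation ours, not the essay's): maximize entropy subject to expectation constraints, whose solution is exponential in the constraint functions,

```latex
\max_{p}\; -\sum_i p_i \log p_i
\quad\text{s.t.}\quad
\sum_i p_i = 1,\quad \sum_i p_i\, f_j(x_i) = c_j \;\;(j = 1,\dots,m)
\qquad\Longrightarrow\qquad
p_i \;=\; \frac{\exp\bigl(\sum_j \lambda_j f_j(x_i)\bigr)}
               {\sum_k \exp\bigl(\sum_j \lambda_j f_j(x_k)\bigr)},
```

with the Lagrange multipliers \lambda_j chosen so that the constraints hold. Results 2-4 probe what happens when this solution is read as a Bayesian posterior or predictive distribution.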

112 citations


Journal ArticleDOI
01 Aug 1986-Ecology
TL;DR: A sequential Bayes computational algorithm, suitable for microcomputers, is given and a plot of successive posterior distributions can be used as a visual diagnostic of conformity with basic assumptions.
Abstract: Traditional analyses (e.g., Schnabel 1938 or Chapman 1954) of sequential mark—recapture experiments (Petersen and Schnabel type) yield population estimates with substantial negative bias and overly large confidence intervals if the combination of the number of animals marked and examined falls too low. To address these problems, sequential mark—recapture experiments are cast into a Bayesian framework using a "noninformative" discrete uniform improper prior (a priori theoretical) distribution. Some properties of the posterior distribution (probability of each population size given the data) are briefly and informally discussed (inference, convergence, mean, mode, median, and treatment of nuisance parameters). A sequential Bayes computational algorithm, suitable for microcomputers, is given. Several examples are presented as a practical guide to computing estimates. For relatively small sample sizes, the Bayesian approach yields larger mean abundance estimates than traditional methods. There is little difference in these estimates for larger sampling efforts. Advantages of the approach include the following: the probability of observing the data at all feasible population sizes is calculated exactly; the method works for all cases regardless of sample size or sampling procedure; a plot of successive posterior distributions can be used as a visual diagnostic of conformity with basic assumptions; and finally, inferences can be made directly, since the end product completely describes the uncertainty of the population size given the data.
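A sketch of the sequential Bayes scheme described above, assuming a Schnabel-type design in which every captured animal is marked before release, a hypergeometric capture model, and a uniform prior on population size up to a user-chosen ceiling (all assumptions of this sketch, not necessarily of the authors' algorithm):

```python
import numpy as np
from scipy.stats import hypergeom

def sequential_posterior(occasions, n_max=2000):
    """Sequential posterior over population size N for mark-recapture data.

    occasions: list of (n_caught, m_marked) pairs, one per sampling
    occasion; unmarked captures are marked and released. Prior on N is
    discrete uniform on 1..n_max.
    """
    N = np.arange(1, n_max + 1)
    log_post = np.zeros(N.size)                 # uniform prior
    marked = 0                                  # marks currently at large
    for n, m in occasions:
        lp = hypergeom.logpmf(m, N, marked, n)  # P(m marked | N, marks, n)
        log_post += np.where(np.isnan(lp), -np.inf, lp)  # infeasible N
        marked += n - m                         # newly marked animals
    log_post -= log_post.max()
    post = np.exp(log_post)
    return N, post / post.sum()

N, post = sequential_posterior([(30, 0), (40, 4), (35, 6), (50, 11)])
print("posterior mode:", N[np.argmax(post)])
print("posterior mean:", (N * post).sum().round(1))
```

Plotting `post` after each occasion gives exactly the visual diagnostic of conformity with assumptions that the abstract mentions.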

108 citations


Journal ArticleDOI
TL;DR: A recently developed statistical model, called Bayesian vector autoregression (BVAR), has proven to be a useful tool for economic forecasting; such a model forecasts a strong resurgence of growth in the second half of 1985 and in 1986.
Abstract: A recently developed statistical model, called Bayesian vector autoregression, has proven to be a useful tool for economic forecasting. Such a model today forecasts a strong resurgence of growth in the second half of 1985 and in 1986.

90 citations


Journal ArticleDOI
TL;DR: In this paper, item response curves for a set of binary responses are studied from a Bayesian viewpoint of estimating the item parameters, and the EM algorithm is used to compute the posterior mode.
Abstract: Item response curves for a set of binary responses are studied from a Bayesian viewpoint of estimating the item parameters. For the two-parameter logistic model with normally distributed ability, restricted bivariate beta priors are used to illustrate the computation of the posterior mode via the EM algorithm. The procedure is illustrated by data from a mathematics test.
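For reference, the two-parameter logistic model in question has the standard form, with ability \theta_i, item discrimination a_j, and item difficulty b_j (notation ours):

```latex
P(Y_{ij} = 1 \mid \theta_i) \;=\; \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}},
\qquad \theta_i \sim N(0, 1),
```

with the restricted bivariate beta priors placed on the item parameters and the posterior mode located by EM, as the abstract describes.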

Journal ArticleDOI
TL;DR: In this paper, the authors characterize all externally Bayesian pooling formulas and give conditions under which the opinion of the group will be proportional to the geometric average of the individual densities.
Abstract: When a panel of experts is asked to provide some advice in the form of a group probability distribution, the question arises as to whether they should synthesize their opinions before or after they learn the outcome of an experiment. If the group posterior distribution is the same whatever the order in which the pooling and the updating are done, the pooling mechanism is said to be externally Bayesian by Madansky (1964). In this paper, we characterize all externally Bayesian pooling formulas and we give conditions under which the opinion of the group will be proportional to the geometric average of the individual densities.
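In the usual notation (ours, not necessarily the paper's), the geometric, or log-linear, pool and the externally Bayesian condition read:

```latex
T(p_1,\dots,p_n)(\theta) \;=\; c \prod_{i=1}^{n} p_i(\theta)^{w_i},
\qquad \sum_{i=1}^{n} w_i = 1,
```

```latex
T\!\left(\frac{p_1 L}{\int p_1 L},\dots,\frac{p_n L}{\int p_n L}\right)
\;=\; \frac{L \cdot T(p_1,\dots,p_n)}{\int L \cdot T(p_1,\dots,p_n)}
\quad\text{for every likelihood } L,
```

i.e., updating each expert's density by the likelihood L and then pooling gives the same group posterior as pooling first and updating afterwards.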

Journal ArticleDOI
TL;DR: Practical aspects of a new technique for monitoring and controlling the predictive performance of Bayesian forecasting models are discussed and an associated method of automatically detecting and rejecting outliers and adapting models to abrupt structural changes in the time series is discussed.
Abstract: Practical aspects of a new technique for monitoring and controlling the predictive performance of Bayesian forecasting models are discussed. The basic features of the approach to model monitoring introduced in a general setting in West (1986) are described and extended to a wide class of dynamic, nonnormal, and nonlinear Bayesian forecasting models. An associated method of automatically detecting and rejecting outliers and adapting models to abrupt structural changes in the time series is also discussed. The resulting forecast monitoring and control scheme is simply constructed and applied and is illustrated in two applications.

Journal ArticleDOI
TL;DR: A bootstrap estimate of the sampling density of a robust estimator is used to replace the likelihood in Bayes's formula, so that prior information can be incorporated without direct knowledge of the error distribution.
Abstract: SUMMARY Bayesian analysis is subject to the same kinds of misspecification problems which motivate the robustness and nonparametric literature. We present a method of incorporating prior information which performs well without direct knowledge of the error distribution. This is accomplished by replacing the likelihood in Bayes's formula by a bootstrap estimate of the sampling density of a robust estimator. The flexibility of the method is illustrated by examples, and its performance relative to standard Bayesian techniques is evaluated in a Monte Carlo study.
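A compact sketch of the idea, with the median standing in as the robust estimator and a kernel density estimate of the bootstrapped pivot T - theta supplying the likelihood (the estimator, prior, and data here are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)
data = rng.standard_t(df=3, size=50) + 2.0          # heavy-tailed sample

# Bootstrap the sampling distribution of the robust estimator (median).
t_obs = np.median(data)
boot = np.array([np.median(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])
pivot_kde = gaussian_kde(boot - t_obs)              # density of T - theta

# Posterior on a grid: prior times bootstrap likelihood g(t_obs - theta).
theta = np.linspace(0.0, 4.0, 400)
prior = norm.pdf(theta, loc=0.0, scale=5.0)         # vague normal prior
post = prior * pivot_kde(t_obs - theta)
dt = theta[1] - theta[0]
post /= post.sum() * dt                             # normalize on the grid
print("posterior mean:", ((theta * post).sum() * dt).round(3))
```

No parametric form for the error distribution enters anywhere; the bootstrap density plays the role the likelihood normally would.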

Journal ArticleDOI
TL;DR: It is pointed out that a Bayesian forecasting procedure performed better according to an average mean square error (MSE) criterion than the many other forecasting procedures utilized in the forecasting experiments reported in an extensive study by Makridakis et al. (1982).

01 Jan 1986
TL;DR: A technique called sampling the future is presented for incorporating parameter uncertainty into both the estimation and forecasting stages of ARIMA modeling; it also allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
Abstract: The Box-Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet-a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
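A minimal "sampling the future" sketch for an AR(1) model, using the exact conditional-conjugate posterior under a flat prior as a stand-in for the general ARIMA posterior treated in the paper (model, prior, and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_the_future(y, n_ahead=12, n_draws=1000):
    """Simulate the joint predictive distribution of an AR(1) process.

    Draws (sigma^2, phi) from the flat-prior posterior given y, then
    simulates one future path per draw; the bundle of paths approximates
    the multiperiod predictive density.
    """
    x, z = y[:-1], y[1:]
    phi_hat = x @ z / (x @ x)
    dof = len(z) - 1
    s2 = (z - phi_hat * x) @ (z - phi_hat * x) / dof
    paths = np.empty((n_draws, n_ahead))
    for i in range(n_draws):
        sigma2 = s2 * dof / rng.chisquare(dof)          # scaled inv-chi^2
        phi = rng.normal(phi_hat, np.sqrt(sigma2 / (x @ x)))
        level = y[-1]
        for h in range(n_ahead):
            level = phi * level + rng.normal(0.0, np.sqrt(sigma2))
            paths[i, h] = level
    return paths

# Toy AR(1) series, then pointwise 90% predictive bands
y = np.empty(200); y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.normal()
bands = np.percentile(sample_the_future(y), [5, 50, 95], axis=0)
print(bands[:, :4].round(2))
```

Highest-probability forecast regions, the predictive density's shape, and subjective reweighting of the parameter draws can all be read off or applied to the returned bundle of paths.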


Journal ArticleDOI
TL;DR: Two-filter formulae for the Bayes solution of the fixed-interval discrete-time nonlinear smoothing problem are obtained, and known smoothing results for the linear Gaussian case are interpreted in the light of the general Bayesian results.
Abstract: Two-filter formulae for the Bayes solution of the fixed-interval discrete-time nonlinear smoothing problem are obtained. The smoothed a posteriori density is computed under the assumptions of a general Markov signal observed through a general memoryless noisy channel. The case where there is feedback from the observation to the signal is also considered. The derived algorithms complement a two-pass algorithm obtained under somewhat more restrictive assumptions by Askar and Derin (1981). Known smoothing results for the linear Gaussian case are interpreted in the light of the general Bayesian results.
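The generic identity behind two-filter formulae of this kind, written in modern state-space notation (which may differ from the paper's): the smoothed density factors into a forward filtering density and a backward likelihood,

```latex
p(x_k \mid y_{1:N}) \;\propto\; p(x_k \mid y_{1:k})\; p(y_{k+1:N} \mid x_k),
```

where the second factor obeys the backward recursion

```latex
p(y_{k:N} \mid x_k) \;=\; p(y_k \mid x_k) \int p(y_{k+1:N} \mid x_{k+1})\,
p(x_{k+1} \mid x_k)\, dx_{k+1}.
```

In the linear Gaussian case both factors are Gaussian and the product recovers the classical two-filter smoother.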

01 Jan 1986
TL;DR: Akaike's Bayesian Information Criterion (ABIC) was used in this paper to detect changes in the magnitude-frequency relation, and the frequency of larger shocks in comparison with smaller shocks increased about 1 year preceding the main shocks.
Abstract: Based on Akaike's Bayesian Information Criterion (ABIC), a new method is proposed for detecting changes in the magnitude-frequency relation. Earthquakes are classified into a few magnitude classes and the ratio between the numbers of shocks in these classes is estimated by using ABIC. The time variation of the magnitude-frequency relation is investigated by this method for the source areas of three events with magnitude 6.0 and above in central Japan: the Southern Ibaraki Earthquake of 1983, the Eastern Yamanashi Earthquake of 1983 and the Western Nagano Earthquake of 1983. In the Southern Ibaraki and the Eastern Yamanashi regions, the frequency of larger shocks in comparison with smaller shocks increased about 1 year preceding the main shocks. In the Western Nagano region, the number of large shocks may have increased 2 years before the main shock. The time variations of the former cases are statistically significant.
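For reference, Akaike's Bayesian Information Criterion scores competing Bayesian models (here, competing change hypotheses for the class-ratio parameters) by their maximized marginal likelihood; this is the standard definition following Akaike (1980), not notation taken from the paper:

```latex
\mathrm{ABIC} \;=\; -2 \log \max_{\lambda} \int L(\theta \mid \text{data})\,
\pi(\theta \mid \lambda)\, d\theta \;+\; 2\,\dim(\lambda),
```

where \lambda collects the hyperparameters; the model with the smaller ABIC is preferred.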

Proceedings ArticleDOI
01 Dec 1986
TL;DR: It is argued that Bayesian methodology is an appropriate tool in certain simulation contexts and possible remedies to computational problems, specific to simulation applications, are outlined.
Abstract: It is argued that Bayesian methodology is an appropriate tool in certain simulation contexts. Computational problems, specific to simulation applications, are then described in some detail; possible remedies are also outlined.

ReportDOI
01 Mar 1986
TL;DR: In this paper, a comparison of two methodologies for the analysis of uncertainty in risk analyses is presented, one methodology combines approximate methods for confidence interval estimation of system reliability with a bounding approach for information derived from expert opinion, and the other method employs Bayesian arguments to construct probability distributions for component reliabilities using data from experiments and observation and expert opinion.
Abstract: A comparison of two methodologies for the analysis of uncertainty in risk analyses is presented. One methodology combines approximate methods for confidence interval estimation of system reliability with a bounding approach for information derived from expert opinion. The other method employs Bayesian arguments to construct probability distributions for component reliabilities using data from experiments and observation and expert opinion. The system reliability distribution is then derived using a conventional Monte Carlo analysis. An extensive discussion of the differences between confidence intervals and Bayesian probability intervals precedes the comparison. The comparison is made using a trial problem from the Arkansas Nuclear One-Unit 1 Nuclear Power Plant. It is concluded that the Maximus/Bounding methodology tends to produce somewhat longer intervals than the Bayes/Monte Carlo method, although this finding is based on comparisons made under nonidentical assumptions regarding the treatment of operator recovery rates. The Bayes/Monte Carlo method is shown to produce useful information regarding the importance of uncertainty about each component's reliability in determining overall uncertainty. 8 refs., 31 figs., 5 tabs.
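A toy version of the Bayes/Monte Carlo side of the comparison, with invented test data and a hypothetical four-component system in place of the plant model (everything here is an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(42)

def reliability_draws(failures, trials, n_draws):
    """Beta(1,1) prior + binomial test data -> posterior reliability draws."""
    return rng.beta(1 + trials - failures, 1 + failures, size=n_draws)

n_draws = 10_000
r_a = reliability_draws(1, 50, n_draws)   # (failures, trials) per component
r_b = reliability_draws(2, 50, n_draws)
r_c = reliability_draws(0, 30, n_draws)
r_d = reliability_draws(1, 30, n_draws)

# Hypothetical system: (A parallel B) in series with (C parallel D)
r_sys = (1 - (1 - r_a) * (1 - r_b)) * (1 - (1 - r_c) * (1 - r_d))
lo, hi = np.percentile(r_sys, [5, 95])
print(f"90% probability interval for system reliability: ({lo:.4f}, {hi:.4f})")
```

Varying one component's data at a time and watching the system interval move gives the kind of importance information about component-level uncertainty that the report highlights.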

Journal ArticleDOI
TL;DR: Methods of simulating the frequency coverage of an interval estimate by simulating the average Bayesian posterior probability coverage are presented and can be much more efficient than the standard method that simulates the hit rate of the interval.
Abstract: SUMMARY Methods of simulating the frequency coverage of an interval estimate by simulating the average Bayesian posterior probability coverage are presented. The methods can be much more efficient than the standard method that simulates the hit rate of the interval. The possible increased efficiency is illustrated using three examples: estimating a binomial probability, bootstrapping a variance, and multiple imputation intervals for the mean.
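A sketch of the variance-reduction idea for the binomial example: with p drawn from its prior, the 0/1 hit indicator and the posterior probability content of the interval estimate the same average coverage, the latter with lower Monte Carlo variance (the Wald interval, uniform prior, and settings are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(7)

def coverage_two_ways(n=30, n_sims=2000):
    """Prior-averaged coverage of the Wald interval for a binomial p."""
    p = rng.uniform(size=n_sims)            # p ~ uniform prior
    x = rng.binomial(n, p)
    p_hat = x / n
    half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
    lo, hi = p_hat - half, p_hat + half
    hit_rate = ((lo <= p) & (p <= hi)).mean()          # standard estimate
    posterior_content = (beta.cdf(hi, 1 + x, 1 + n - x)
                         - beta.cdf(lo, 1 + x, 1 + n - x)).mean()
    return hit_rate, posterior_content

print(coverage_two_ways())
```

Both numbers estimate the same coverage; across repeated runs the second fluctuates noticeably less.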

Journal Article
TL;DR: A formula relating prior and posterior probabilities of paternity, based solely on genetic marker testing results (exclusion or nonexclusion), is reiterated as a substitute for the current paternity index.
Abstract: In a recent publication, Li and Chakravarti claim to have shown that the paternity index is not a likelihood ratio. They present a method of estimating the prior probability of paternity from a sample of previous court cases on the basis of exclusions and nonexclusions. They propose calculating the posterior probability on the basis of this estimated prior and the test result expressed as exclusion/nonexclusion. Their claim is wrong: the paternity index is a likelihood ratio, that is, the ratio of the likelihoods of the observation conditional on the two mutually exclusive hypotheses. Their proposed method of estimating the prior has long been known, has been applied to several samples, and is inferior (in terms of variance of the estimate) to maximum likelihood estimation based on all the phenotypic information available. Their proposed "new method" of calculating a posterior probability is based on the use of a less informative likelihood ratio 1/(1 - PE) instead of Gürtler's fully informative paternity index X/Y (Acta Med Leg Soc Liege 9:83-93, 1956), but is otherwise identical to the Bayesian approach originally introduced by Essen-Möller in 1938.
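The Bayesian relation the abstract defends, in the usual notation: with paternity index PI = X/Y acting as the likelihood ratio and prior probability of paternity \pi, the posterior probability of paternity is

```latex
W \;=\; \frac{\mathrm{PI}\cdot\pi}{\mathrm{PI}\cdot\pi + (1 - \pi)},
```

which is just posterior odds = likelihood ratio times prior odds; the criticized method plugs the less informative ratio 1/(1 - PE) into the same formula in place of PI.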

Journal ArticleDOI
Robert F. Bordley
TL;DR: Granger and Ramanathan's proposal for combining n expert forecasts, an intercept term plus unnormalized weights in place of the standard weighted average, is deduced from Bayesian principles; their formula is found to be equivalent to taking a weighted average of the n expert forecasts plus the decision-maker's prior forecast.
Abstract: The standard approach to combining n expert forecasts involves taking a weighted average. Granger and Ramanathan proposed introducing an intercept term and unnormalized weights. This paper deduces their proposal from Bayesian principles. We find that their formula is equivalent to taking a weighted average of the n expert forecasts plus the decision-maker's prior forecast.
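In symbols (ours, not the paper's), the Granger-Ramanathan combination and its Bayesian reading are

```latex
\hat{y} \;=\; \beta_0 + \sum_{i=1}^{n} \beta_i f_i
\qquad\text{versus}\qquad
\hat{y} \;=\; w_0\, m_0 + \sum_{i=1}^{n} w_i f_i,
```

where the f_i are the expert forecasts, the \beta_i are unnormalized regression weights with intercept \beta_0, and m_0 is the decision-maker's prior forecast; the intercept term is thus playing the role of the weighted prior forecast w_0 m_0.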

Book ChapterDOI
01 Jan 1986
TL;DR: In this article, a Bayesian approach is used to measure the amount of information about some parameter o that is present in a set of data X that a decision maker (DM) is uncertain about.
Abstract: The central topic of this paper is the measurement of the amount of information about some parameter o that is present in a set of data X. The parameter o can be any quantity such that a decision maker (DM) is uncertain about its value. We follow a Bayesian approach and assume that the DM can represent his uncertainty at any stage of the learning process in terms of a subjective probability distribution over the parameter space Ω of all possible values of o. This distribution, in turn, will be represented by a generalized probability density function (gpdf) ξ with respect to some fixed σ-finite measure λ on Ω.

Journal ArticleDOI
TL;DR: In this article, the authors present two examples of statistical modeling in a Bayesian framework, to demonstrate both the scope of current integration methods and the ease with which a sensitivity analysis can be carried out.
Abstract: Numerical integration techniques now exist which permit a very flexible approach to Bayesian modelling. Low dimensional summaries can be extracted from joint posterior distributions of up to 15 dimensions. The sensitivity of particular summaries to changes in both the model and the prior can thus be investigated. This paper discusses the various aspects of sensitivity in a Bayesian analysis and demonstrates something of the power of numerical integration via two examples.

Practical data analysis consistent with the Bayesian paradigm has recently been given a substantial boost with the development of efficient high-dimensional numerical integration methods. These allow the computation of posterior moments and low dimensional summaries such as univariate and bivariate marginal distributions from high-dimensional posterior densities. For early examples using Monte Carlo integration methods, see Stewart (1979) or van Dijk & Kloek (1978). Naylor & Smith (1982) describe an iterative procedure which makes repeated use of Gauss-Hermite rules over three dimensional cartesian product grids. A general review of progress in this area is given by Smith et al. (1985).

In 1983, a research project funded by the SERC was established at the University of Nottingham to extend the ideas of Naylor & Smith to higher dimensions. That project has confirmed that numerical integration in up to 15 dimensions can be carried out routinely, if mixed strategies involving cartesian product rules, spherical rules and Monte Carlo procedures are used. Details are given by Shaw (1985a,b) and will be published elsewhere.

Freed from constraints imposed by analytical tractability, a Bayesian analysis is straightforward and extremely flexible. In computing terms, all that is required is a routine for evaluating the likelihood for selected values of the model parameters. Prior distributions need not be restricted to conjugate classes and can more effectively represent prior opinion. The sensitivity of particular inferences to changes in the form of either the model or the prior can be readily investigated. This paper presents two examples of statistical modelling in a Bayesian framework, to demonstrate both the scope of current integration methods and the ease with which a sensitivity analysis can be carried out.

Although a Bayesian analysis is commonly portrayed as the revision of subjective beliefs in the light of available data, the idea of a Bayesian sensitivity analysis is not new. It has frequently been advocated that a single data set should motivate many prior to posterior analyses. These arguments are reviewed in Section 2. What is still needed is a practical definition of sensitivity in this context. Consider a graph in which several versions of a marginal posterior density are superimposed. The fact that some of these curves are distinguishable from each other, at the level of resolution employed by the display, does not automatically imply that the margin is sensitive to choice of either the model or the prior. In some circumstances, gross discrepancies in the tails of the distributions may be unimportant providing the mode or the mean is stable. In other circumstances, the estimation of extreme tail area probabilities may be the sole purpose of the analysis. One possibility
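A one-dimensional sketch of the normal-carrier Gauss-Hermite quadrature underlying the Naylor & Smith approach (a single pass, with the rule's center and scale assumed rather than iteratively refitted, and an invented Student-t example):

```python
import numpy as np

def posterior_moments_gh(log_post, mu, sigma, n_nodes=20):
    """Posterior mean and variance of a 1-D parameter by Gauss-Hermite.

    log_post: vectorized unnormalized log posterior; the rule is centered
    at mu with scale sigma (Naylor & Smith iterate these; we do not).
    """
    z, w = np.polynomial.hermite_e.hermegauss(n_nodes)  # probabilists' rule
    theta = mu + sigma * z
    # add z^2/2 to cancel the e^{-z^2/2} weight, so nodes see the raw posterior
    lw = log_post(theta) + 0.5 * z ** 2 + np.log(w)
    f = np.exp(lw - lw.max())                           # stabilized weights
    mean = (theta * f).sum() / f.sum()
    var = ((theta - mean) ** 2 * f).sum() / f.sum()
    return mean, var

# Example: t_5 likelihood with a diffuse normal prior (no closed form)
data = np.array([1.2, 0.8, 2.1, 1.5, 0.3])

def log_post(theta):
    th = np.atleast_1d(theta)[:, None]
    loglik = -3.0 * np.log1p((data - th) ** 2 / 5.0)    # t_5 kernel
    return loglik.sum(axis=1) - th[:, 0] ** 2 / 50.0    # N(0, 25) prior

print(posterior_moments_gh(log_post, mu=1.0, sigma=0.5))
```

Rerunning with a different prior line is exactly the kind of cheap sensitivity check the paper advocates.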

Journal ArticleDOI
TL;DR: In this paper, a robust Student t-type procedure is studied from the Bayesian point of view and an accurate approximation to its posterior distribution is worked out.
Abstract: A robust Student t-type procedure proposed by Tiku (1980, 1983a) is studied from the Bayesian point of view and an accurate approximation to its posterior distribution is worked out.


Patent
07 Feb 1986
TL;DR: A Bayesian image processing method and apparatus take into account supplementary source information previously ignored by most-likely-source-distribution techniques and produce highly accurate results.
Abstract: A Bayesian image processing method and apparatus take into account supplementary source information previously ignored by most likely source distribution techniques and produce highly accurate results.