
Showing papers on "Posterior probability" published in 1983


Journal ArticleDOI
TL;DR: A new theoretical result is presented: the joint probabilistic data association (JPDA) algorithm, in which joint posterior association probabilities are computed for multiple targets (or multiple discrete interfering sources) in Poisson clutter.
Abstract: The problem of associating data with targets in a cluttered multi-target environment is discussed and applied to passive sonar tracking. The probabilistic data association (PDA) method, which is based on computing the posterior probability of each candidate measurement found in a validation gate, assumes that only one real target is present and all other measurements are Poisson-distributed clutter. In this paper, a new theoretical result is presented: the joint probabilistic data association (JPDA) algorithm, in which joint posterior association probabilities are computed for multiple targets (or multiple discrete interfering sources) in Poisson clutter. The algorithm is applied to a passive sonar tracking problem with multiple sensors and targets, in which a target is not fully observable from a single sensor. Targets are modeled with four geographic states, two or more acoustic states, and realistic (i.e., low) probabilities of detection at each sample time. A simulation result is presented for two heavily interfering targets illustrating the dramatic tracking improvements obtained by estimating the targets' states using joint association probabilities.

1,421 citations
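
As a rough illustration of the machinery the JPDA paper builds on, here is a minimal sketch of the single-target (PDA) association probabilities in Poisson clutter; the predicted measurement, innovation covariance, detection/gate probabilities and clutter density are hypothetical, and the joint multi-target enumeration that is the paper's actual contribution is not reproduced.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical single-target PDA step (JPDA extends these weights to joint
# association events over several targets).
z_hat = np.array([0.0, 0.0])            # predicted measurement
S = np.array([[1.0, 0.2], [0.2, 1.5]])  # innovation covariance
P_D, P_G = 0.6, 0.99                    # detection and gate probabilities
lam = 0.5                               # Poisson clutter spatial density

z_validated = [np.array([0.4, -0.3]), np.array([1.2, 0.8])]  # gated measurements

# Likelihood of each validated measurement under the target-originated hypothesis
e = np.array([P_D * multivariate_normal.pdf(z, mean=z_hat, cov=S) for z in z_validated])
b = lam * (1.0 - P_D * P_G)             # weight of "no measurement is target-originated"

beta = np.append(b, e) / (b + e.sum())  # beta[0]: no detection; beta[i]: measurement i
print(beta)                             # posterior association probabilities
```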


Journal ArticleDOI
TL;DR: In this paper, a Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization.
Abstract: A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.

281 citations
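
The multivariate normal simplification mentioned in the abstract is a linearization around the posterior mode. The sketch below illustrates that idea on a made-up two-parameter nonlinear model, using BFGS's built-in inverse-Hessian estimate in place of an analytic linearization; the model form, data, known error variance and flat priors are all assumptions for illustration, not the paper's catchment model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical nonlinear "catchment-like" model: runoff = a * (1 - exp(-b * rainfall))
rain = np.linspace(1, 50, 40)
true_a, true_b, sigma = 12.0, 0.08, 0.8
runoff = true_a * (1 - np.exp(-true_b * rain)) + rng.normal(0, sigma, rain.size)

def neg_log_posterior(theta):
    a, b = theta
    resid = runoff - a * (1 - np.exp(-b * rain))
    # Independent Gaussian errors with known sigma and flat priors -- a deliberately
    # simplified stand-in for the Box-Cox / ARMA error structure in the paper.
    return 0.5 * np.sum(resid ** 2) / sigma ** 2

fit = minimize(neg_log_posterior, x0=np.array([5.0, 0.2]), method="BFGS")
mode = fit.x                    # posterior mode
cov = fit.hess_inv              # normal approximation to the posterior covariance
sd = np.sqrt(np.diag(cov))      # posterior standard deviations (parameter uncertainty)
corr = cov / np.outer(sd, sd)   # posterior correlation matrix
print(mode, sd, corr, sep="\n")
```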


Proceedings Article
08 Aug 1983
TL;DR: A new method for calculating the conditional probability of any multi-valued predicate given particular information about the individual case is presented, based on the principle of Maximum Entropy (ME), and gives the most unbiased probability estimate given the available evidence.
Abstract: This paper presents a new method for calculating the conditional probability of any multi-valued predicate given particular information about the individual case. This calculation is based on the principle of Maximum Entropy (ME), sometimes called the principle of least information, and gives the most unbiased probability estimate given the available evidence. A review of previous methods for computing maximum entropy values shows that they are either very restrictive in the probabilistic information (constraints) they can use or combinatorially explosive. The computational complexity of the new procedure depends on the inter-connectedness of the constraints, but in practical cases it is small. In addition, the maximum entropy method can give a measure of how accurately a calculated conditional probability is known.

182 citations
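
For a concrete, if hypothetical, version of the calculation: the sketch below maximizes entropy over a joint distribution of two binary variables subject to two probability constraints and then reads off a conditional probability. The variables, constraint values and the use of a generic SLSQP solver are assumptions; the paper's own procedure exploits the structure of the constraints rather than a general-purpose optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Cells of a joint distribution over (D, S), ordered (0,0), (0,1), (1,0), (1,1).
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},       # probabilities sum to one
    {"type": "eq", "fun": lambda p: p[2] + p[3] - 0.10},   # constraint: P(D=1) = 0.10
    {"type": "eq", "fun": lambda p: p[3] - 0.08},          # constraint: P(D=1, S=1) = 0.08
]
res = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(1e-9, 1.0)] * 4,
               method="SLSQP", constraints=constraints)
p = res.x

# Conditional probability of interest under the maximum-entropy joint distribution
print("P(D=1 | S=1) =", p[3] / (p[1] + p[3]))
```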


Journal ArticleDOI
TL;DR: Classic analysis is most misleading when the hypothesis in question is already unlikely to be true, when the baseline event rate is low, or when the observed differences are small.
Abstract: Conventional interpretation of clinical trials relies heavily on the classic p value. The p value, however, represents only a false-positive rate, and does not tell the probability that the investigator's hypothesis is correct, given his observations. This more relevant posterior probability can be quantified by an extension of Bayes' theorem to the analysis of statistical tests, in a manner similar to that already widely used for diagnostic tests. Reanalysis of several published clinical trials according to Bayes' theorem shows several important limitations of classic statistical analysis. Classic analysis is most misleading when the hypothesis in question is already unlikely to be true, when the baseline event rate is low, or when the observed differences are small. In such cases, false-positive and false-negative conclusions occur frequently, even when the study is large, when interpretation is based solely on the p value. These errors can be minimized if revised policies for analysis and reporting of clinical trials are adopted that overcome the known limitations of classic statistical theory with applicable Bayesian conventions.

139 citations
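
In its simplest form, the Bayesian reinterpretation described above is Bayes' theorem with the significance level as the false-positive rate and the study's power as the true-positive rate. A small worked sketch with hypothetical numbers:

```python
# Posterior probability that the treatment effect is real, given a "significant" result.
# All numbers are hypothetical; this mirrors the diagnostic-test analogy in the abstract.
prior = 0.10      # prior probability that the hypothesis is true (an unlikely hypothesis)
alpha = 0.05      # false-positive rate (the classic p-value threshold)
power = 0.80      # probability of a significant result when the effect is real

posterior = power * prior / (power * prior + alpha * (1 - prior))
print(f"P(effect is real | p < 0.05) = {posterior:.2f}")   # about 0.64, not 0.95
```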


Journal ArticleDOI
TL;DR: In this article, the authors examine the difficulties associated with the assessment of conditional probabilities and observe a high frequency of implicit violations of the probability calculus; the consistency of the assessments is affected by the causal/diagnostic and positive/negative relationships of the events.
Abstract: "Public agencies are very keen on amassing statistics-they collect them, add them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But what you must never forget is that every one of those figures comes in the first instance from the village watchman, who just puts down what he damn pleases." Sir Josiah Stamp The assessment of the conditional probabilities of events is useful and needed for forecasting, planning, and decision making. In this paper the difficulties associated with the assessment of these conditional probabilities are examined. The necessary and sufficient conditions that the elicited information on conditional probabilities must satisfy are evaluated against actual assessments in several different controlled settings. A high frequency of implicit violations of the probability calculus was observed. The consistency of the assessments is affected by the causal/diagnostic and positive/negative relationships of the events. Use of a judgmental aid in the form of a joint probability table reduces the number of inconsistent responses significantly. Using the probability axioms, it is also shown that only the first order conditional probabilities need be assessed, as higher order probabilities are robust to the unconditional and first order conditional assessments.

53 citations
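
The joint probability table used as a judgmental aid can be illustrated with a small coherence check: the two routes to P(A and B) implied by the elicited conditionals must agree, and every cell of the implied table must be non-negative. The assessed values below are hypothetical.

```python
# Elicited (hypothetical) assessments
P_A, P_B = 0.30, 0.40
P_A_given_B, P_B_given_A = 0.50, 0.70

# The two routes to the joint probability must agree (Bayes consistency)
joint_from_B = P_A_given_B * P_B      # 0.20
joint_from_A = P_B_given_A * P_A      # 0.21  -> these assessments are incoherent

print("coherent Bayes relation:", abs(joint_from_B - joint_from_A) < 1e-9)

# Joint probability table implied by P(A), P(B) and one of the conditionals
joint = P_A_given_B * P_B
table = {
    ("A", "B"): joint,
    ("A", "not B"): P_A - joint,
    ("not A", "B"): P_B - joint,
    ("not A", "not B"): 1 - P_A - P_B + joint,
}
print("all cells non-negative:", all(v >= 0 for v in table.values()))
```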


Journal ArticleDOI
H. Fluehler, Andrew P. Grieve, D. Mandallaz, J. Mau, H. A. Moser
TL;DR: The statistical methods required for a Bayesian analysis of bioequivalence are outlined and numerically illustrated, and nomograms helpful for the calculation of these probabilities are provided.

51 citations



Journal ArticleDOI
TL;DR: In this article, the authors consider sampling from an unknown probability distribution on the integers and show that, with a tail-free prior, the posterior distribution is consistent, whereas mixing the tail-free prior with a point mass may make the posterior inconsistent.
Abstract: Consider sampling from an unknown probability distribution on the integers. With a tail-free prior, the posterior distribution is consistent. With a mixture of a tail-free prior and a point mass, however, the posterior may be inconsistent. This is likewise true for a countable mixture of tail-free priors. Similar results are given for Dirichlet priors.

34 citations


Journal ArticleDOI
TL;DR: A brief description of Bayesian analysis using Monte Carlo integration is given and an example is presented that illustrates the Bayesian estimation of an asymmetric density and includes a display of distribution and density functions generated from the posterior distribution.
Abstract: A brief description of Bayesian analysis using Monte Carlo integration is given. An example is presented that illustrates the Bayesian estimation of an asymmetric density and includes a display of distribution and density functions generated from the posterior distribution. Other papers are referenced that contain examples that illustrate the power of this approach (a) to handle more accurate formulations of real problems, (b) to analyse difficult models and data for small samples, and (c) to compute predictive distributions and posterior distributions for many functions of the parameters.

33 citations
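
A minimal sketch of Bayesian estimation by Monte Carlo (importance-sampling) integration is given below, computing posterior moments for a single parameter with an asymmetric posterior; the toy binomial likelihood, uniform prior and normal importance function are assumptions chosen only so the answer can be checked against a known Beta posterior.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: a small binomial sample gives an asymmetric posterior for the success rate.
n_obs, k_obs = 12, 3

def unnormalized_posterior(theta):
    # Binomial likelihood times a uniform prior on (0, 1)
    inside = (theta > 0) & (theta < 1)
    return np.where(inside, theta ** k_obs * (1 - theta) ** (n_obs - k_obs), 0.0)

# Normal importance function roughly centered on the maximum-likelihood estimate
g = stats.norm(loc=k_obs / n_obs, scale=0.15)
draws = g.rvs(size=50_000, random_state=rng)
w = unnormalized_posterior(draws) / g.pdf(draws)      # importance weights

post_mean = np.sum(w * draws) / np.sum(w)
post_var = np.sum(w * (draws - post_mean) ** 2) / np.sum(w)
print(post_mean, np.sqrt(post_var))   # compare with the exact Beta(4, 10): mean 2/7 ~ 0.286
```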


Journal ArticleDOI
TL;DR: In this article, the state evolves either as a diffusion process or as a finite-state Markov process, and the measurement process consists either of a nonlinear function of the state with additive white noise or of a counting process with intensity dependent on the state.
Abstract: Systems are considered where the state evolves either as a diffusion process or as a finite-state Markov process, and the measurement process consists either of a nonlinear function of the state with additive white noise or of a counting process with intensity dependent on the state. Fixed-interval smoothing is considered, and the first main result obtained expresses a smoothing probability or a probability density symmetrically in terms of forward filtered, reverse-time filtered and unfiltered quantities; an associated result replaces the unfiltered and reverse-time filtered quantities by a likelihood function. Then stochastic differential equations are obtained for the evolution of the reverse-time filtered probability or probability density and the reverse-time likelihood function. Lastly, a partial differential equation is obtained linking smoothed and forward filtered probabilities or probability densities; in all instances considered, this equation is not driven by any measurement process. The different...

32 citations
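
In the finite-state, discrete-time case the symmetric expression reduces to the familiar two-filter form: smoothed probabilities are proportional to the forward-filtered probabilities times a reverse-time likelihood. A sketch with a hypothetical two-state chain and Gaussian measurement noise:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

A = np.array([[0.95, 0.05],      # A[i, j] = P(x_{t+1} = j | x_t = i)
              [0.10, 0.90]])
levels = np.array([0.0, 1.0])    # state-dependent signal level
sigma = 0.7                      # measurement noise standard deviation
T = 100

# Simulate a state path and noisy measurements
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.choice(2, p=A[x[t - 1]])
y = levels[x] + rng.normal(0, sigma, T)

lik = norm.pdf(y[:, None], loc=levels[None, :], scale=sigma)   # lik[t, j] = p(y_t | x_t = j)

# Forward filter: alpha[t] proportional to p(x_t | y_1..t)
alpha = np.zeros((T, 2))
alpha[0] = np.array([0.5, 0.5]) * lik[0]
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * lik[t]
    alpha[t] /= alpha[t].sum()

# Reverse-time likelihood: beta[t, i] proportional to p(y_{t+1..T} | x_t = i)
beta = np.ones((T, 2))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (lik[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()     # normalization only for numerical stability

smoothed = alpha * beta          # symmetric combination of forward and reverse quantities
smoothed /= smoothed.sum(axis=1, keepdims=True)
print(np.mean(smoothed.argmax(axis=1) == x))   # fraction of states recovered
```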


Journal ArticleDOI
TL;DR: In this article, the problem of how to best utilize site and regional flood data to infer the shape parameter of a flood distribution is considered, and the posterior distribution of the power normal parameters expressing what is known about the parameters given site flood data and regional information on λ is derived.
Abstract: The problem of how to best utilize site and regional flood data to infer the shape parameter of a flood distribution is considered. One approach to this problem is given in Bulletin 17B of the U.S. Water Resources Council (1981) for the log-Pearson distribution. Here a lesser known distribution is considered, namely, the power normal which fits flood data as well as the log-Pearson and has a shape parameter denoted by λ derived from a Box-Cox power transformation. The problem of regionalizing λ is considered from an empirical Bayes perspective where site and regional flood data are used to infer λ. The distortive effects of spatial correlation and heterogeneity of site sampling variance of λ are explicitly studied with spatial correlation being found to be of secondary importance. The end product of this analysis is the posterior distribution of the power normal parameters expressing, in probabilistic terms, what is known about the parameters given site flood data and regional information on λ. This distribution can be used to provide the designer with several types of information. The posterior distribution of the T-year flood is derived. The effect of nonlinearity in λ on inference is illustrated. Because uncertainty in λ is explicitly allowed for, the understatement in confidence limits due to fixing λ (analogous to fixing log skew) is avoided. Finally, it is shown how to obtain the marginal flood distribution which can be used to select a design flood with specified exceedance probability.
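
A point-estimate sketch of the power normal (Box-Cox) fit and a T-year flood quantile is given below. Unlike the paper, it fixes λ at its at-site maximum likelihood value instead of carrying a posterior distribution for λ informed by regional data, so it understates the uncertainty the paper is designed to capture; the flow data are hypothetical.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(3)
flows = rng.gamma(shape=3.0, scale=150.0, size=40)    # hypothetical annual peak flows

z, lam = stats.boxcox(flows)          # at-site MLE of the Box-Cox shape parameter lambda
mu, sd = z.mean(), z.std(ddof=1)      # power-normal location and scale on the transformed scale

T = 100                               # return period in years
q_transformed = stats.norm.ppf(1 - 1 / T, loc=mu, scale=sd)
q_flood = inv_boxcox(q_transformed, lam)              # T-year flood on the original scale
print(f"lambda = {lam:.2f}, estimated {T}-year flood = {q_flood:.0f}")
```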

Journal ArticleDOI
TL;DR: In this paper, the posterior distribution of a small-scale illustrative econometric model is used to compare symmetric simple importance sampling with asymmetric simple importance sampling, and numerical results include posterior first and second order moments.
Abstract: The posterior distribution of a small-scale illustrative econometric model is used to compare symmetric simple importance sampling with asymmetric simple importance sampling. The numerical results include posterior first and second order moments, numerical error estimates of the first order moments, posterior modes, univariate marginal posterior densities and bivariate marginal posterior densities plotted in three-dimensional figures.

Journal ArticleDOI
TL;DR: In this article, the authors developed a model for computer-aided underwriting for a major composite insurance company for two classes of commercial business: motor fleet and fire, and tested it on over 500 actual risks presented to branch underwriters.
Abstract: Bayesian models for computer-aided underwriting have been developed for a major composite insurance company for two classes of commercial business: motor fleet and fire. The fire model produces a posterior probability distribution over a discretized dimension of risk, defined as the probability of a loss times the severity of the loss, but operationalized as a rate appropriate for that degree of risk. A separate model is developed for each class of risk (e.g. shops, warehouses). In its generic form, the fire model comprises 10-13 factors (e.g. housekeeping, security) and three possible levels for each factor (e.g. for housekeeping, untidy, average, tidy). A branch underwriter using the model identifies the factor levels appropriate to the building under consideration; the computer then selects the corresponding pre-assessed likelihoods, computes posterior probabilities and expected rates, and displays the rate along with a credibility factor that indicates the relative plausibility of this pattern of factor levels. The models have been tested on over 500 actual risks presented to branch underwriters, and the pure rates generated by the model were found to be at least as good as rates determined for the same risks by experienced underwriters. The motor fleet model gives the predictive distribution of the total claims cost for a fleet. Frequency and size of claims are assumed to be described by a compound Poisson model with parameters that are modified by three to five years' claims experience of the fleet relative to the experience of the whole portfolio of fleets, by the nature of the business carried out by the vehicles in the fleet and by the geographical district in which the fleet usually resides. This model has been tested on over 300 cases drawn from the files and has generated rates considered by head office underwriters to be satisfactory. Both models, which can easily be implemented on microcomputers, demonstrate how the Bayesian approach provides a general and flexible framework within which both hard data and expert judgement can be accommodated. In this sense, these models are among the first in a new generation of underwriting models that extend the capabilities of underwriters.
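
The rating step of the fire model can be sketched in a few lines: pre-assessed likelihoods for the identified factor levels are combined (here under an assumed conditional-independence-given-risk-level structure) with a prior over the discretized risk dimension, normalized, and converted to an expected rate. The factors, levels, likelihoods and rates below are hypothetical placeholders, not the company's assessed values.

```python
import numpy as np

rates = np.array([0.5, 1.0, 2.0, 4.0])       # discretized risk dimension (per-mille rates)
prior = np.array([0.25, 0.40, 0.25, 0.10])   # prior over the risk levels

# Pre-assessed likelihoods P(observed factor level | risk level), one entry per risk level
likelihoods = {
    ("housekeeping", "untidy"):  np.array([0.05, 0.15, 0.35, 0.60]),
    ("housekeeping", "average"): np.array([0.35, 0.50, 0.45, 0.30]),
    ("housekeeping", "tidy"):    np.array([0.60, 0.35, 0.20, 0.10]),
    ("security", "poor"):        np.array([0.10, 0.20, 0.40, 0.55]),
    ("security", "good"):        np.array([0.90, 0.80, 0.60, 0.45]),
}

# Underwriter's judgments for the building under consideration
observed = [("housekeeping", "untidy"), ("security", "good")]

posterior = prior.copy()
for key in observed:
    posterior *= likelihoods[key]            # factors assumed independent given the risk level
posterior /= posterior.sum()

expected_rate = np.dot(posterior, rates)
print(posterior.round(3), f"expected rate = {expected_rate:.2f} per mille")
```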

Journal ArticleDOI
TL;DR: In this article, a nonparametric estimate for the posterior probabilities in the classification problem using multivariate thin plate splines is proposed, which presents a nonparametric alternative to logistic discrimination as well as to survival curve estimation.
Abstract: A nonparametric estimate for the posterior probabilities in the classification problem using multivariate thin plate splines is proposed. This method presents a nonparametric alternative to logistic discrimination as well as to survival curve estimation. The degree of smoothness of the estimate is determined from the data using generalized cross-validation.


Journal ArticleDOI
TL;DR: In this paper, a multivariate linear model with missing observations in a nested pattern is discussed, where the predictive density of the missing observations is taken into account in determining the posterior distribution of B and its mean and variance matrix.
Abstract: We discuss the case of the multivariate linear model Y = XB + E with Y an (n × p) matrix, and so on, when there are missing observations in the Y matrix in a so-called nested pattern. We propose an analysis that arises by incorporating the predictive density of the missing observations in determining the posterior distribution of B, and its mean and variance matrix. This involves us with matric-T variables. The resulting analysis is illustrated with some Canadian economic data.

Journal ArticleDOI
TL;DR: In this paper, nominal and six interacting groups of four managers each were given a probabilistic inferential task using the Phillips and Edwards' experimental procedure, and the mean nominal group response for the problem was substantially closer to the Bayesian posterior probability than was the mean interacting group response.
Abstract: Nominal groups behave more like Bayesian information-processors than interacting groups. Six nominal and six interacting groups of four managers each were given a probabilistic inferential task using the Phillips and Edwards' experimental procedure. The mean nominal group response for the problem (.667) was substantially closer to the Bayesian posterior probability (.81) than was the mean interacting group response (.498).
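
The Phillips and Edwards procedure is a bookbag-and-poker-chip task, so the normative benchmark is a direct Bayes update. A sketch with hypothetical bag compositions and a hypothetical sample (not the study's actual stimuli):

```python
from math import comb

# Two bags of chips; one is chosen at random (prior 0.5 each).
p_red_A, p_red_B = 0.70, 0.30     # hypothetical proportions of red chips
reds, blues = 6, 4                # hypothetical sample drawn with replacement

like_A = comb(reds + blues, reds) * p_red_A ** reds * (1 - p_red_A) ** blues
like_B = comb(reds + blues, reds) * p_red_B ** reds * (1 - p_red_B) ** blues

posterior_A = 0.5 * like_A / (0.5 * like_A + 0.5 * like_B)
print(f"Bayesian posterior for bag A = {posterior_A:.3f}")
# Human and group judgments in such tasks are typically conservative,
# i.e. closer to the 0.5 prior than this normative value.
```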

Journal ArticleDOI
TL;DR: In this paper, it was shown that the estimation of exact Bayesian intervals for the reliability of a series system from subsystem test data gives rise to computational difficulties involving severe loss of computing precision as the number of subsystems in the system increases.
Abstract: The determination of exact Bayesian intervals for the reliability of a series system from subsystem test data gives rise to computational difficulties involving severe loss of computing precision as the number of subsystems in the system increases. The end points of Bayesian intervals are percentage points of the posterior distribution and these are shown to be well approximated by Cornish and Fisher expansions when the number of subsystems is small. As the number of subsystems in the system increases even greater accuracy is guaranteed by the asymptotic nature of the expansions. The system posterior distribution function is also shown to be well approximated by an Edgeworth expansion.

01 Jan 1983
TL;DR: A new formula for the probability of a union of events is used to express the failure probability of an n-component system, and it is shown that the average value of the estimator over many runs of the algorithm tends to converge quickly to the failure probability of the system.
Abstract: A new formula for the probability of a union of events is used to express the failure probability of an n-component system. A very simple Monte Carlo algorithm based on the new probability formula is presented. The input to the algorithm gives the failure probabilities of the n components of the system and a list of the failure sets of the system. The output is an unbiased estimator of the failure probability of the system. We show that the average value of the estimator over many runs of the algorithm tends to converge quickly to the failure probability of the system. The overall time to estimate the failure probability with high accuracy compares very favorably with the execution times of other methods used for solving this problem.
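
The abstract does not state the new union formula, but the flavor of such estimators can be conveyed with a standard importance-sampling scheme for the probability of a union of cut-set failure events (a Karp-Luby-style construction). The component probabilities and cut sets below are hypothetical, and this sketch is not claimed to be the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

q = np.array([0.10, 0.20, 0.05, 0.15])     # component failure probabilities (hypothetical)
cut_sets = [[0, 1], [1, 2], [0, 3]]        # system fails if every component of some set fails

# Exact probability of each cut-set event A_k under independent component failures
p = np.array([np.prod(q[c]) for c in cut_sets])
S = p.sum()

def one_sample():
    k = rng.choice(len(cut_sets), p=p / S)       # pick a cut set with probability p_k / S
    x = rng.random(len(q)) < q                   # independent component failures ...
    x[cut_sets[k]] = True                        # ... conditioned on event A_k occurring
    first = next(j for j, c in enumerate(cut_sets) if x[c].all())
    return S if first == k else 0.0              # unbiased for P(A_1 U ... U A_m)

estimate = np.mean([one_sample() for _ in range(200_000)])
print(f"estimated system failure probability = {estimate:.4f}")
```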

01 Jan 1983
TL;DR: In this article, the posterior probability that a p-dimensional observation vector originates from one of k normal distributions with identical covariance matrices is investigated. And the validity of various estimators and approximate confidence intervals is investigated by simulation experiments.
Abstract: This paper is devoted to the asymptotic distribution of estimators for the posterior probability that a p-dimensional observation vector originates from one of k normal distributions with identical covariance matrices. The estimators are based on training samples for the k distributions involved. Observation vector and prior probabilities are regarded as given constants. The validity of various estimators and approximate confidence intervals is investigated by simulation experiments.
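
A plug-in version of the estimator studied here, with training-sample means and a pooled covariance substituted into the normal densities, is easy to write down; the simulated populations, priors and observation vector below are hypothetical, and the paper's asymptotic confidence intervals are replaced by a crude simulation experiment in the spirit of the paper's own validation.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

# Hypothetical training samples from k = 2 bivariate normals with a common covariance
mu_true = [np.array([0.0, 0.0]), np.array([1.5, 1.0])]
cov_true = np.array([[1.0, 0.3], [0.3, 1.0]])
samples = [rng.multivariate_normal(m, cov_true, size=40) for m in mu_true]

priors = np.array([0.5, 0.5])          # prior probabilities, regarded as given constants
x0 = np.array([0.8, 0.6])              # observation vector, regarded as a given constant

def posterior_estimate(groups):
    means = [g.mean(axis=0) for g in groups]
    pooled = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups) / \
             sum(len(g) - 1 for g in groups)
    dens = np.array([multivariate_normal.pdf(x0, mean=m, cov=pooled) for m in means])
    post = priors * dens
    return post / post.sum()

print("plug-in posterior:", posterior_estimate(samples).round(3))

# Small simulation experiment: regenerate training samples to gauge sampling variability
sims = np.array([
    posterior_estimate([rng.multivariate_normal(m, cov_true, size=40) for m in mu_true])
    for _ in range(500)
])
print("simulated std of the estimated P(class 1 | x0):", sims[:, 0].std().round(3))
```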

ReportDOI
01 Sep 1983
TL;DR: In this paper, a nonparametric estimate for the posterior probabilities in the classification problem using multivariate smoothing splines is proposed, which is useful in exploring properties of the data and in presenting them in a way comprehensible to the layman.
Abstract: : A nonparametric estimate for the posterior probabilities in the classification problem using multivariate smoothing splines is proposed. This estimate presents a nonparametric alternative to logistic discrimination and to survival curve estimation. It is useful in exploring properties of the data and in presenting them in a way comprehensible to the layman. The estimate is obtained as the solution to a constrained minimization problem in a reproducing kernel Hilbert space. It is shown that under certain conditions an estimate exists and is unique.



Journal ArticleDOI
TL;DR: This paper focuses on three distinct issues: the form of the Bayesian posterior distribution, the computation of the first moment of this distribution, and the asymptotic behavior of the posterior sequence.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of estimating the number of correct answers actually known in a multiple-choice examination as a problem in posterior probability and find that the value which maximizes the likelihood of indicating t correct answers is approximately one half greater than the conventional estimate; the dependence of the adjustment on l, n and t is so small that no amendment to marking schemes is proposed.
Abstract: If l alternative responses are provided for each of n equally-weighted questions in a multiple-choice examination, and the candidate correctly indicates t answers, the prior probability estimate of the number of answers actually known is given by (lt – n)/(l – 1), a result implicit in most assessment schemes. Considered as a problem in posterior probability, the value which maximizes the likelihood of indicating t correct answers is shown to be approximately one half greater than this expression. The dependence of the 'adjustment' on l, n and t is so small that no amendment to marking schemes is proposed.
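
Both estimates can be computed directly; the values below (l = 5 alternatives, n = 60 questions, t = 42 correct responses) are hypothetical.

```python
l, n, t = 5, 60, 42   # alternatives per question, number of questions, correct answers

# Conventional "correction for guessing": prior estimate of the number of answers known
prior_estimate = (l * t - n) / (l - 1)

# The paper's posterior / maximum-likelihood value is approximately one half greater
adjusted_estimate = prior_estimate + 0.5

print(prior_estimate, adjusted_estimate)   # 37.5 and 38.0 answers actually known
```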

Journal ArticleDOI
TL;DR: In this paper, the first and second moments of R2 as a function of the size of the model were derived for two extreme cases, viz completely correct and completely incorrect models.
Abstract: With the help of conditional probabilities, formulas are derived for the first and second moments of R2 as a function of the size of the model. The formulas are valid in the space group P1̄ for two extreme cases, viz completely correct and completely incorrect models. Incorporation of the observed intensities enables one to obtain accurate a priori estimates of ⟨R2⟩ and σ(R2). The theory agrees very well with simulated experiments.

Book ChapterDOI
01 Jan 1983
TL;DR: In this paper, the authors discuss several Bayesian approaches for estimating the failure rate function when no a priori assumption is made about the parametric family of the underlying life distribution.
Abstract: In this paper we discuss several Bayesian approaches for estimating the failure rate function when no a priori assumption is made about the parametric family of the underlying life distribution. Our results are based on an extension of the existing Bayesian estimates of the cumulative distribution function, which have recently appeared in the literature. In the sequel, we introduce and review the Dirichlet process, the gamma process, and Processes Neutral to the Right, and point out their relevance and usefulness for the problem at hand.


Journal ArticleDOI
01 Jan 1983
TL;DR: The problem of statistical pattern recognition with noisy or imprecise feature measurements is considered and an exact analytical expression is found for the probability of misclassification under this condition, for multiclass multivariate systems.
Abstract: The problem of statistical pattern recognition with noisy or imprecise feature measurements is considered. An exact analytical expression is found for the probability of misclassification under this condition, for multiclass multivariate systems. The probability of error exceeds that of the ideal case. For the special case of two classes, the a priori conditional probability density functions are assumed to be normal, along with two cases of feature measurement error, namely normal and uniform probability density functions. Monotonicity of the misclassification probability with measurement error variance is shown. Numerical results are presented for both cases over a workable range of parameters. The study is useful in practical pattern recognition problems.
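
For two univariate normal classes with equal priors the misclassification probability has a closed form, and adding measurement-error variance degrades it monotonically, as the paper shows in greater generality. The means, class variance and noise levels below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

mu0, mu1 = 0.0, 2.0        # class means (hypothetical)
sigma2 = 1.0               # within-class variance of the ideal feature
taus = np.array([0.0, 0.5, 1.0, 2.0])   # measurement-error variances (normal noise)

# Equal priors, equal variances: Bayes error = Phi(-delta / 2), where delta is the
# class separation measured in units of the observed (noisy) feature's std dev.
for tau2 in taus:
    delta = abs(mu1 - mu0) / np.sqrt(sigma2 + tau2)
    error = norm.cdf(-delta / 2)
    print(f"noise variance {tau2:3.1f} -> misclassification probability {error:.3f}")
```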

Journal ArticleDOI
Xi-Ren Cao
TL;DR: In this article, the unnormalized conditional probability equation of a one-dimensional linear system is solved by using the Lie algebra associated with the system, and the explicit form of the conditional probability with arbitrary initial distribution is obtained.