
Showing papers on "Posterior probability published in 1988"


Book
01 Jan 1988

1,522 citations


Journal ArticleDOI
TL;DR: In this paper, an outlier is defined as an observation with a large random error, generated by the linear model under consideration, and is detected by examining the posterior distribution of the random errors.
Abstract: SUMMARY An approach to detecting outliers in a linear model is developed. An outlier is defined to be an observation with a large random error, generated by the linear model under consideration. Outliers are detected by examining the posterior distribution of the random errors. An augmented residual plot is also suggested as a graphical aid in finding outliers. We propose a precise definition of an outlier in a linear model which appears to lead to simple ways of exploring data for the possibility of outliers. The definition is such that, if the parameters of the model are known, then it is also known which observations are outliers. Alternatively, if the parameters are unknown, the posterior distribution can be used to calculate the posterior probability that any observation is an outlier. In a linear model with normally distributed random errors, εi, with mean zero and variance σ², we declare the ith observation to be an outlier if |εi| > kσ for some choice of k. The value of k can be chosen so that the prior probability of an outlier is small and thus outliers are observations which are more extreme than is usually expected. Realizations of normally distributed errors of more than about three standard deviations from the mean are certainly surprising, and worth further investigation. Such outlying observations can occur under the assumed model, however, and this should be taken into account when deciding what to do with outliers and in choosing k. Note that εi is the actual realization of the random error, not the usual estimated residual ε̂i. The problem of outliers is studied and thoroughly reviewed by Barnett & Lewis (1984), Hawkins (1980), Beckman & Cook (1983) and Pettit & Smith (1985). The usual Bayesian approach to outlier detection uses the definition given by Freeman (1980). Freeman defines an outlier to be 'any observation that has not been generated by the mechanism that generated the majority of observations in the data set'. Freeman's definition therefore requires that a model for the generation of outliers be specified and is implemented by, for example, Box & Tiao (1968), Guttman, Dutter & Freeman (1978) and Abraham & Box (1978). Our method differs in that we define outliers as arising from the model under consideration rather than arising from a separate, expanded, model. Our approach is similar to that described by Zellner & Moulton (1985) and is an extension of the philosophy
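The definition above lends itself to direct simulation. The following is a minimal sketch (not the authors' code) that estimates P(|εi| > kσ | y) for each observation by drawing (β, σ²) from the standard noninformative posterior of a normal linear model; the synthetic data and the choice k = 3 are illustrative assumptions.

```python
# Sketch (not the paper's code): estimate P(|eps_i| > k*sigma | y) for each
# observation in a normal linear model, using draws from the standard
# noninformative posterior p(beta, sigma^2) proportional to 1/sigma^2.
# The data below are synthetic and k = 3 is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 30, 2, 3.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
y[0] += 4.0                                   # plant one gross error

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)

draws = 5000
outlier = np.zeros(n)
for _ in range(draws):
    sigma2 = (n - p) * s2 / rng.chisquare(n - p)                 # sigma^2 | y
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)   # beta | sigma^2, y
    eps = y - X @ beta                                           # realized errors
    outlier += np.abs(eps) > k * np.sqrt(sigma2)
print(outlier / draws)   # posterior probability each observation is an outlier
```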

182 citations


Journal ArticleDOI
Peter Lenk1
TL;DR: In this paper, a generalization of the process derived from a logistic transform of a Gaussian process is proposed to model the common density of an exchangeable sequence of observations.
Abstract: This article models the common density of an exchangeable sequence of observations by a generalization of the process derived from a logistic transform of a Gaussian process. The support of the logistic normal includes all distributions that are absolutely continuous with respect to the dominating measure of the observations. The logistic-normal family is closed in the prior to posterior Bayes analysis, with the observations entering the posterior distribution through the covariance function of the Gaussian process. The covariance of the Gaussian process plays the role of a smoothing kernel. Three features of the model provide a flexible structure for computing the predictive density: (a) The mean of the Gaussian process corresponds to the prior mean of the random density; (b) The prior variance of the Gaussian process controls the influence of the data in the posterior process: as the variance increases, the predictive density has greater fidelity to the data; (c) The prior covariance of the Gau...

127 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian procedure is presented for estimating the reliability of a series system of independent binomial subsystems and components, and the posterior distribution of the overall missile-system reliability from which the required estimates are obtained is computed.
Abstract: A Bayesian procedure is presented for estimating the reliability of a series system of independent binomial subsystems and components. The method considers either test or prior data (perhaps both or neither) at the system, subsystem, and component level. Beta prior distributions are assumed throughout. Inconsistent prior judgments are averaged within the simple-to-use procedure. The method is motivated by the following practical problem. It is required to estimate the overall reliability of a certain air-to-air heat-seeking missile system containing five major subsystems with up to nine components per subsystem. The posterior distribution of the overall missile-system reliability from which the required estimates are obtained is computed.
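A rough sketch of the kind of calculation described, under simplifying assumptions: draw each subsystem reliability from its Beta posterior and multiply for the series system. The prior parameters and test counts below are made up, and Monte Carlo stands in for the paper's procedure.

```python
# Illustrative sketch (not the paper's procedure): series-system reliability
# with independent binomial subsystems and Beta priors, via Monte Carlo.
# The prior parameters and test counts are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
# (alpha0, beta0, successes, failures) per subsystem -- made-up values
subsystems = [(1, 1, 48, 2), (2, 1, 95, 5), (1, 1, 30, 0)]

draws = 100_000
R_sys = np.ones(draws)
for a0, b0, s, f in subsystems:
    R_sys *= rng.beta(a0 + s, b0 + f, size=draws)   # posterior draw per subsystem

print("posterior mean of system reliability:", R_sys.mean())
print("90% credible interval:", np.quantile(R_sys, [0.05, 0.95]))
```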

115 citations


Journal ArticleDOI
TL;DR: In this article, a set of unknown normal means (treatment effects, say) {θ1, θ2, …, θk} is investigated, and a Bayesian approach is taken, leading to calculation of the posterior probability of H0 and the posterior probabilities that each mean is the largest, conditional on H0 being false.
Abstract: A set of unknown normal means (treatment effects, say) {θ1, θ2, …, θk} is to be investigated. Two common questions in analysis of variance and ranking and selection are as follows: (a) What is the strength of evidence against the hypothesis H0 of equality of means? (b) If H0 is false, which mean is the largest (or smallest)? A Bayesian approach to the problem is taken, leading to calculation of the posterior probability of H0 and the posterior probabilities that each mean is the largest, conditional on H0 being false. A variety of exchangeable, nonexchangeable, informative, and noninformative prior assumptions are considered. Calculations involve, at worst, only low-dimensional numerical integration, in spite of the fact that the dimension k can be arbitrarily large. As an example, Table 1 presents, for each baseball team in the National League in 1984, the highest batting average obtained by any player on the team with at least 150 at bats. The observed batting averages are treated as sampl...
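A simplified sketch of the second question: Monte Carlo estimation of the posterior probability that each mean is the largest, assuming independent normal posteriors (the paper's exchangeable-prior calculations are more refined). The posterior means and variances below are hypothetical.

```python
# Sketch (a simplification, not the paper's exact method): Monte Carlo estimate
# of P(theta_i is the largest | data), assuming independent posteriors
# theta_i | data ~ N(m_i, v_i). The values of m_i and v_i are made up.
import numpy as np

rng = np.random.default_rng(2)
m = np.array([0.310, 0.325, 0.298, 0.340])       # posterior means (made up)
v = np.array([0.0004, 0.0005, 0.0004, 0.0006])   # posterior variances (made up)

draws = rng.normal(m, np.sqrt(v), size=(200_000, len(m)))
p_largest = np.bincount(draws.argmax(axis=1), minlength=len(m)) / draws.shape[0]
print(p_largest)   # posterior probability each mean is the largest
```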

99 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed conjugate prior distributions for the von Mises distribution, which they used to compute a posterior distribution of the location of an emergency transmitter in a downed aircraft.
Abstract: We study the problem of determining the location of an emergency transmitter in a downed aircraft. The observations are bearings read at fixed stations. A Bayesian approach, yielding a posterior map of probable locations, seems reasonable in this situation. We therefore develop conjugate prior distributions for the von Mises distribution, which we use to compute a posterior distribution of the location. An approximation to the posterior distribution yields accurate, rapidly computable answers. A common problem with this kind of data is the possibility that signals will reflect off orographic terrain features, resulting in wild bearings. Such bearings can affect the posterior distribution severely. We develop a sensitivity analysis, based on the idea of predictive distribution, to reject wild bearings. The method, which is based on an asymptotic argument, nonetheless performs well in a small simulation study. When the preceding approximation is used, the sensitivity analysis is practical in terms ...
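As a pointer to the conjugate structure the abstract mentions, here is a textbook-style sketch of the conjugate update for the von Mises mean direction when the concentration κ is assumed known; it is not the paper's full posterior map over transmitter locations, and the bearings and prior settings are made up.

```python
# Sketch: conjugate update for the von Mises mean direction with known
# concentration kappa (illustration only; not the paper's location posterior).
# Bearings and prior settings are hypothetical.
import numpy as np

kappa = 4.0                                      # assumed known concentration
theta = np.radians([42.0, 45.0, 39.0, 44.0])     # observed bearings (made up)

# Prior on the mean direction mu: von Mises with direction mu0 and weight R0.
mu0, R0 = np.radians(40.0), 2.0

# Posterior is proportional to exp(C*cos(mu) + S*sin(mu)), i.e. von Mises again.
C = R0 * np.cos(mu0) + kappa * np.cos(theta).sum()
S = R0 * np.sin(mu0) + kappa * np.sin(theta).sum()
mu_post = np.arctan2(S, C)        # posterior mean direction
kappa_post = np.hypot(C, S)       # posterior concentration
print(np.degrees(mu_post), kappa_post)
```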

86 citations


Patent
20 Jul 1988
TL;DR: A method is presented for adapting the value of the probability of occurrence of the first of two binary symbols; an approximation limits the allowed probability values so that the adaptation can be implemented as a deterministic finite state machine.
Abstract: The present invention relates to computer apparatus and methodology for adapting the value of a probability of the occurrence of a first of two binary symbols which includes (a) maintaining a count of the number k of occurrences of the first symbol; (b) maintaining a total count of the number n of occurrences of all symbols; (c) selecting confidence limits for the probability; and (d) when the probability is outside the confidence limits, effectuating a revision in the value of the probability directed toward restoring confidence in the probability value. The number of allowed probabilities is, optionally, less than the total number of possible probabilities given the probability precision. Moreover, an approximation is employed which limits the number of probabilities to which a current probability can be changed, thereby enabling the probability adaptation to be implemented as a deterministic finite state machine.
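A small sketch of the (a)-(d) loop described above. The patent text quoted here does not specify the confidence-limit formula or the revision rule, so a normal-approximation binomial interval and a reset to the empirical rate are used purely for illustration.

```python
# Sketch of the (a)-(d) adaptation loop. The confidence-limit formula and the
# revision rule below are illustrative assumptions, not the patented method.
import math

def adapt(bits, p0=0.5, z=2.58):
    p, k, n = p0, 0, 0
    for b in bits:
        k += (b == 0)          # (a) count of the first symbol
        n += 1                 # (b) total count of all symbols
        if n < 8:
            continue
        phat = k / n
        half = z * math.sqrt(phat * (1 - phat) / n)   # (c) confidence limits
        if not (phat - half <= p <= phat + half):     # (d) p no longer credible
            p = phat                                  # revise toward the data
    return p

print(adapt([0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0] * 10))
```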

81 citations


Journal ArticleDOI
TL;DR: In simple, well-defined cases--for example, screening situations, where the prevalence of disease and the relative consequences of false-positive and false-negative classifications can be estimated--a Bayesian decision analysis is appropriate: the optimal discrimination limit is selected and the total loss is minimized.
Abstract: Evaluation of diagnostic tests by the following principles is reviewed: error rates, scores based on posterior probabilities, and the excess loss considered in a decision-theoretic context. Error rates, or the complementary non-error rates specificity and sensitivity, are simple measures which provide a rough indication of the discriminative value. In clinical practice, where a test serves as a decision support together with other information, conversion of test results to posterior probabilities is recommended. An aggregate score of these probabilities expresses the value of the test. Finally, in simple, well-defined cases--for example, screening situations, where the prevalence of disease and the relative consequences of false-positive and false-negative classifications can be estimated--a Bayesian decision analysis is appropriate. The optimal discrimination limit is selected, and the total loss is minimized. The likelihood ratio LR(x) plays a central role in probability calculations and in the decision analysis. An example illustrates application of the procedures.
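A worked example of the recommended conversion: posterior disease probability from prevalence and the likelihood ratio for a positive result. All numbers are made up.

```python
# Worked example of converting a test result to a posterior probability via the
# likelihood ratio LR(x). The prevalence and test characteristics are made up.
prevalence = 0.02                      # prior probability of disease
sensitivity, specificity = 0.90, 0.95  # hypothetical test characteristics

LR_positive = sensitivity / (1 - specificity)          # LR(x) for a positive result
prior_odds = prevalence / (1 - prevalence)
posterior_odds = prior_odds * LR_positive              # Bayes' theorem in odds form
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))   # ~0.269 despite a "good" test, because prevalence is low
```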

72 citations


Journal ArticleDOI
TL;DR: An approach to incorporating higher-order probabilities in systems that reason about beliefs is explored and found to be epistemologically wanting, although it does capture some important intuitions about beliefs.

53 citations


Proceedings ArticleDOI
11 Apr 1988
TL;DR: It is shown that motion estimation, an ill-posed problem, can be regularized by means of a Bayesian estimation approach; a probabilistic formulation for motion estimation in images and a stochastic algorithm for minimizing the associated objective function are presented.
Abstract: Presents a probabilistic formulation for motion estimation in images and a stochastic algorithm for minimization of the associated objective function. It is shown that motion estimation, an ill-posed problem, can be regularized by means of a Bayesian estimation approach. The unknown motion field is modeled as a two-dimensional vector Markov random field with a certain neighbourhood system. The posterior distribution of the motion field given image observations is then a Gibbs distribution. Maximization of this a posteriori probability to obtain the MAP estimate of the motion field is achieved by simulated annealing. Results of the estimation procedure applied to television sequences with natural motion are presented.

44 citations


Book ChapterDOI
01 Jan 1988
TL;DR: In this paper, the joint posterior probability that multiple frequencies are present, independent of their amplitude and phase, and the noise level, is calculated for computer simulated data and for real data ranging from magnetic resonance to astronomy to economic cycles.
Abstract: Bayesian spectrum analysis is still in its infancy. It was born when E. T. Jaynes derived the periodogram as a sufficient statistic for determining the spectrum of a time-sampled data set containing a single stationary frequency. Here we extend that analysis and explicitly calculate the joint posterior probability that multiple frequencies are present, independent of their amplitude and phase, and the noise level. This is then generalized to include other parameters such as decay and chirp. Results are given for computer simulated data and for real data ranging from magnetic resonance to astronomy to economic cycles. We find substantial improvements in resolution over Fourier transform methods.
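A sketch of the single-frequency result this work builds on: with known noise variance σ², the posterior for the frequency is approximately proportional to exp(C(ω)/σ²), where C(ω) is the Schuster periodogram. The simulated data and assumed known σ are illustrative only.

```python
# Sketch of the single-frequency Bayesian spectrum result (approximate form):
# posterior for the frequency proportional to exp(C(omega)/sigma^2), with C the
# Schuster periodogram. Data are simulated; sigma is assumed known.
import numpy as np

rng = np.random.default_rng(3)
N, sigma, f_true = 256, 1.0, 0.123          # f_true in cycles/sample (made up)
t = np.arange(N)
y = np.cos(2 * np.pi * f_true * t + 0.7) + rng.normal(scale=sigma, size=N)

freqs = np.linspace(0.01, 0.49, 2000)
C = np.array([np.abs(np.sum(y * np.exp(-2j * np.pi * f * t)))**2 / N
              for f in freqs])              # Schuster periodogram C(omega)
log_post = C / sigma**2
post = np.exp(log_post - log_post.max())    # unnormalized, overflow-safe
post /= post.sum() * (freqs[1] - freqs[0])  # normalize on the grid
print("posterior mode:", freqs[post.argmax()])
```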

01 Jan 1988
TL;DR: This dissertation addresses the problem of gathering and fusing information in multi-sensor systems by formalizing a decision-theoretic framework for information-gathering, defined by four components: geometric models, sensor observation models, task models, and models of prior information.
Abstract: This dissertation addresses the problem of gathering and fusing information in multi-sensor systems. We formalize a decision-theoretic framework for information-gathering by defining four components: geometric models, sensor observation models, task models, and models of prior information. Geometric models consist of constrained collections of parametric surfaces. Sensor models mathematically describe the relationship between surface parameters and statistically corrupted sensor observations. Task models consist of a function relating geometry to requested task-specific information, and a utility describing how task performance degrades with inaccurate information. Prior information is encoded in a probability distribution over the parameter space. Using game-theoretic techniques, we demonstrate that a commonly used fusion technique, minimum mean square estimation, is not adequate for the class of sensor models described by our framework. This motivates the development of a finite element method for computing an updated posterior distribution from a prior distribution and a sensor observation. The method accounts for sensor model uncertainty and geometric variations needed to match a surface to observed data. We show that the approximation error of the method is stable, and present simulation results for a variety of positioning and shape determination problems. We then construct search procedures based on techniques developed in the field of experimental design. These procedures choose sensor actions which yield the best predicted performance of fusion for a given task. We first derive and discuss the sequential design method that we use; then show how to compute it efficiently, and demonstrate some of its properties through simulation. We also show how this method can be used to automatically stop sampling when the gains of gathering more data are outweighed by the costs. Finally, we describe and demonstrate a working system based on these methods. This system uses controllable visual search and positioning to estimate the size and position parameters of polygonal objects, and the position, size and shape of superellipsoidal objects. We also discuss issues in extending our methods to multiple sensors, dynamically reconfigurable systems, and sensor planning using artificial intelligence methods.

Journal ArticleDOI
TL;DR: Hampel's concept of qualitative robustness (or stability) is applied to estimates of ‘generalized parameters’ (that is, estimates which take values in an abstract metric space) and the incompatibility between robustness and consistency is proved.

Journal ArticleDOI
TL;DR: In this article, a Bayesian nonparametric approach to a (right) censored data problem was proposed, based on three assumptions: (a) the new patients and the previous sample patients are all deemed to be exchangeable with regard to survival time, (b) the posterior prediction rule, in the case of no censoring or ties among (say n) observed survival times, assigns equal probability of 1/(n + 1) to each of the n + 1 open intervals determined by these values.
Abstract: This article considers a Bayesian nonparametric approach to a (right) censored data problem. Although the results are applicable to a wide variety of such problems, including reliability analysis, the discussion centers on medical survival studies. We extend the posterior distribution of percentiles given by Hill (1968) to obtain predictive posterior probabilities for the survival of one or more new patients, using data from other individuals having the same disease and given the same treatment. The analysis hinges on three assumptions: (a) The new patients and the previous sample patients are all deemed to be exchangeable with regard to survival time. (b) The posterior prediction rule, in the case of no censoring or ties among (say n) observed survival times, assigns equal probability of 1/(n + 1) to each of the n + 1 open intervals determined by these values. (c) The censoring mechanisms are “noninformative.” Detailed discussion of these assumptions is presented from a Bayesian point of view. I...

Journal ArticleDOI
TL;DR: This paper analyzes how the conditional expectations are sensitive to variations in the partitioning and discusses the impact of the choice of distribution function used to represent the random process.
Abstract: The partitioned multi-objective risk method (PMRM) was developed for solving risk-based multi-objective decision making problems. Based on the premise that the expected value concept is not sufficient for proper decision making, the PMRM generates a number of conditional expected value functions (or risk functions) by partitioning the probability axis into probability ranges. The goal of partitioning the probability axis is to have better information on extreme events for decision making purposes. These conditional expectations are dependent on the chosen partitioning points. This paper analyzes how conditional expectations are sensitive to variations in partitioning. One of the risk functions is a measure of extreme and catastrophic events. By using the relationship between this particular risk function and the statistics of extremes, the sensitivity analysis is simplified. In many practical applications, it is difficult to determine which type of distribution function best represents the random process. Conditional expectations also depend on the choice of distribution, and the impact of this selection is discussed.
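An illustration of the partitioned conditional expectations described above (not the PMRM implementation itself): the expected loss conditional on falling within a given range of the probability axis. The lognormal loss model and the partition points are arbitrary choices for the example.

```python
# Illustration: conditional expected loss over probability-axis partitions.
# The lognormal distribution and the partition points are made-up choices.
import numpy as np

rng = np.random.default_rng(4)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

partitions = [(0.0, 0.9), (0.9, 0.99), (0.99, 1.0)]   # probability ranges
for lo, hi in partitions:
    a, b = np.quantile(losses, [lo, hi])
    mask = (losses >= a) & (losses <= b)
    print(f"E[loss | {lo} <= F(loss) <= {hi}] = {losses[mask].mean():.3f}")
```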

Journal ArticleDOI
TL;DR: In this paper, the robustness or sensitivity of some posterior criteria to specification of the prior distribution is considered, and the ranges of the posterior mean and the posterior variance are determined.
Abstract: The robustness or sensitivity of some posterior criteria to specification of the prior distribution is considered. We model the uncertainties in an elicited prior, π0, by an ε-contamination class Γ = {(1 − ε)π0 + εq : q ∈ Q}, and consider the case where the class of contaminations Q = {all probability distributions}. The ranges of the posterior mean and the posterior variance, as the prior distribution varies over Γ, are determined. Examples involving normal and Poisson likelihoods are given.
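A numerical sketch of the range calculation for a normal example, in the spirit of the ε-contamination analyses of Berger and co-workers (not necessarily this paper's exact setup): when Q contains all distributions, the extremes of the posterior mean are attained at point-mass contaminations, so one can scan the contamination location on a grid. All numerical settings are made up.

```python
# Sketch: range of the posterior mean over an epsilon-contamination class with
# Q = {all distributions}, normal likelihood and normal elicited prior.
# Extremes are attained at point-mass contaminations delta_x, so scan x.
# All settings below are hypothetical.
import numpy as np
from scipy.stats import norm

eps = 0.1
xbar, n, sigma = 1.5, 10, 1.0          # data summary (made up)
tau = 1.0                              # elicited prior pi0 = N(0, tau^2)
se2 = sigma**2 / n

m0 = norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + se2))   # marginal under pi0
mu0_post = xbar * tau**2 / (tau**2 + se2)                   # posterior mean under pi0

x = np.linspace(-10, 10, 20001)                  # contamination point delta_x
fx = norm.pdf(xbar, loc=x, scale=np.sqrt(se2))   # marginal likelihood under delta_x
lam = (1 - eps) * m0 / ((1 - eps) * m0 + eps * fx)
post_mean = lam * mu0_post + (1 - lam) * x       # posterior mean under the mixture
print("range of posterior mean:", post_mean.min(), post_mean.max())
```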

Journal ArticleDOI
TL;DR: A two-step estimate-and-maximize (EM)-based (E-step and M-step) iterative algorithm is derived and it is shown that similar results can be obtained for a wide class of parameter estimation problems.
Abstract: The estimation of the parameters of discrete-time autoregressive moving-average (ARMA) processes observed in white noise is considered. A class of time-varying ARMA processes in which the parameter process is the output of a known linear system driven by white Gaussian noise is examined. The maximum a posteriori (MAP) estimator is defined for the trajectory of the parameter's random process. A two-step estimate-and-maximize (EM)-based (E-step and M-step) iterative algorithm is derived. The posterior probability of the parameters is increased in each iteration, and convergence to stationary points of the posterior probability is guaranteed. Each iteration involves two linear systems and is easily implemented. It is shown that similar results can be obtained for a wide class of parameter estimation problems.

Journal ArticleDOI
T. V. Reeves1
TL;DR: This paper argued that probability is not an objective phenomenon that can be identified with either the configurational properties of sequences, or the dynamic properties of sources that generate sequences, and proposed a notion of probability that is a modification of Laplace's classical enunciation.
Abstract: This paper argues that probability is not an objective phenomenon that can be identified with either the configurational properties of sequences, or the dynamic properties of sources that generate sequences. Instead, it is proposed that probability is a function of subjective as well as objective conditions. This is explained by formulating a notion of probability that is a modification of Laplace's classical enunciation. This definition is then used to explain why probability is strongly associated with disordered sequences, and is also used to throw light on a number of problems in probability theory.

Book ChapterDOI
04 Jul 1988
TL;DR: This work considers decoding an iterated product of parity-check codes which results in a vanishingly small error probability provided the channel signal-to-noise ratio is larger than some threshold.
Abstract: Several successive decodings of cascaded codes become possible in principle without information loss if the decoding task is extended to determine a posterior probability distribution on the codewords. Kullback's principle of cross-entropy minimization is considered as a means of implementing it. Its practical use, however, demands some kind of simplification. We propose to look for the posterior distribution in separable form with respect to the information symbols, which leads to decoding output of the same form as its input. As an illustration of these ideas, we consider decoding an iterated product of parity-check codes, which results in a vanishingly small error probability provided the channel signal-to-noise ratio is larger than some threshold. Interpreting a single linear code as a kind of product of its parity checks, the same ideas lead to a simple and efficient algorithm.
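A sketch of the kind of separable posterior update involved, for a single parity check: the standard tanh rule for computing extrinsic bit posteriors. This is shown only as an illustration of the idea; it is not claimed to be the paper's algorithm, and the channel values are hypothetical.

```python
# Sketch: separable posterior update of bit log-likelihood ratios (LLRs) under
# one even-parity check, via the standard tanh rule. Illustration only.
import math

def parity_check_update(llrs):
    """Return extrinsic LLRs for bits constrained by one even-parity check."""
    out = []
    for i in range(len(llrs)):
        prod = 1.0
        for j, l in enumerate(llrs):
            if j != i:
                prod *= math.tanh(l / 2.0)
        prod = max(min(prod, 0.999999), -0.999999)   # numerical safety
        out.append(2.0 * math.atanh(prod))
    return out

channel_llrs = [1.2, -0.4, 2.0, 0.3]        # log P(bit=0)/P(bit=1) from the channel (made up)
extrinsic = parity_check_update(channel_llrs)
posterior = [c + e for c, e in zip(channel_llrs, extrinsic)]
print(posterior)   # per-bit posterior LLRs: separable over the information symbols
```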

Journal ArticleDOI
TL;DR: In this article, a Hotelling T²-like quantity is defined, and its cdf is expressed as a linear combination of the cdf's of some F distributions, which enables us to compute desired probabilities exactly using only one-dimensional numerical integrations.
Abstract: We consider the problem of finding the posterior probabilities of ellipsoids for the difference between two multivariate normal means. We do not require the two population covariance matrices to be equal; therefore, our results pertain to the multivariate Behrens-Fisher problem. A Hotelling T²-like quantity is defined, and its cdf is expressed as a linear combination of the cdf's of some F distributions. This representation enables us to compute desired probabilities exactly using only one-dimensional numerical integrations. The calculation can easily be done with natural conjugate priors as well as with diffuse priors. Our main theoretical result, concerning convolutions of multivariate t distributions, is of some general interest. For the univariate case, we suggest a much simpler proof than that of Ruben (1960). We also give some bounds and approximations for the posterior probabilities of ellipsoids. The computation of these approximations requires much less computer time than the computatio...

Journal ArticleDOI
TL;DR: In this article, a class of quasi-unimodal prior distributions is considered and compared with other classes of prior distributions compatible with these inputs; the ranges of the posterior probabilities of the Ii and of the posterior cdf at the specified prior quantiles are determined.
Abstract: Suppose several quantiles of the prior distribution for θ are specified or, equivalently, the prior probabilities of a partitioning collection of intervals {Ii } are given. In addition, suppose that the prior distribution is assumed to be unimodal. Rather than selecting a single prior distribution to perform a Bayesian analysis, it is of interest to consider the class of all prior distributions compatible with these inputs. For this class and unimodal likelihood functions, the ranges of the posterior probabilities of the Ii and the ranges of the posterior cdf at the specified prior quantiles were determined in Berger and O'Hagan (in press). Unfortunately, calculations with this class can be difficult. Here a similar, much more easily analyzed class of quasiunimodal prior distributions is considered and compared with other classes.

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach for the general formulation of the problem, using a class of priors which involve the unknown ratio explicitly, is presented, and the posterior distribution of the ratio is obtained analytically and its properties are investigated.
Abstract: A variety of statistical problems (e.g., slope-ratio and parallel-line bioassay, calibration, bioequivalence) can be viewed as questions of inference on the ratio of two coefficients in a suitably constructed linear model. This paper develops a Bayesian approach for the general formulation of the problem, using a class of priors which involve the unknown ratio explicitly. The posterior distribution of the ratio is obtained analytically and its properties are investigated, especially the sensitivity to the choice of the prior. Examples are given of applications to slope-ratio bioassay, comparison of the mean effects of two drugs, and a bioequivalence problem.

Journal ArticleDOI
TL;DR: The Simplicity Postulate is a condition imposed by Jeffreys [1948, 1961] on the prior probability distributions over candidate laws; the prior and posterior probabilities involved are interpreted as reasonable degrees of belief.
Abstract: This paper is about the Bayesian theory of inductive inference, and in particular about the status of a condition, called by Jeffreys the Simplicity Postulate, imposed in Jeffreys [1948] and [1961] on the so-called prior probability distributions. I shall explain what the Simplicity Postulate says presently: first, some background. The context of the discussion will be a set of possible laws hi, ostensibly governing some given domain of phenomena, and a test designed to discriminate between them. The prior probabilities of the hi are here simply their pre-test probabilities; the posterior, or post-test, probability distribution is obtained by combining likelihoods with prior probabilities according to Bayes's Theorem: posterior probability ∝ prior probability × likelihood, where the constant of proportionality is the reciprocal of the prior probability of the test outcome e. The likelihood of hi given e is equal to the probability of e, conditional on hi, and in those cases where hi describes a well-defined statistical model which determines a probability distribution over a set of data-points of which e is one, the likelihood of hi on e is just the probability assigned to e by hi. The prior, and hence also the posterior probabilities, are understood to be relativised to a stock of well-confirmed background theories about the structure of the test, presumed to be neutral between the hi. These probabilities are interpreted by Jeffreys as reasonable degrees of belief. In such circumstances it might seem natural to make the prior probabilities of the hi equal. For reasons which will become apparent shortly, Jeffreys instead stipulates that they should be a decreasing function of the complexity of the hi, where the complexity of a hypothesis is measured by its number of independent adjustable parameters, i.e., the

Journal ArticleDOI
Harrison Prosper1
TL;DR: The posterior probability is calculated and used to obtain point estimates and upper limits for the magnitude of the signal; the issue of the correct assignment of prior probabilities is resolved by invoking an invariance principle proposed by Jaynes.
Abstract: The statistics of small signals masked by a background of imprecisely known magnitude is addressed from a Bayesian viewpoint using a simple statistical model which may be derived from the principle of maximum entropy. The issue of the correct assignment of prior probabilities is resolved by invoking an invariance principle proposed by Jaynes. We calculate the posterior probability and use it to calculate point estimates and upper limits for the magnitude of the signal. The results are applicable to high-energy physics experiments searching for new phenomena. We illustrate this by reanalyzing some published data from a few experiments.
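A simplified sketch of the kind of result described: posterior and Bayesian upper limit for a Poisson signal on top of a known background with a flat prior on the signal. The paper treats an imprecisely known background and a prior fixed by an invariance argument, so this is only an illustration; the counts are made up.

```python
# Simplified sketch: posterior and 95% upper limit for a Poisson signal s on
# top of a *known* background b, flat prior on s >= 0. Illustration only; the
# paper's treatment of the background and prior is more careful. Counts made up.
import math
import numpy as np

n_obs, b = 3, 2.1                  # observed counts and assumed known background
s = np.linspace(0, 20, 4001)
ds = s[1] - s[0]
like = (s + b)**n_obs * np.exp(-(s + b)) / math.factorial(n_obs)
post = like / (like.sum() * ds)    # flat prior => posterior proportional to likelihood

cdf = np.cumsum(post) * ds
upper95 = s[np.searchsorted(cdf, 0.95)]
print("posterior mean:", (s * post).sum() * ds, "95% upper limit:", upper95)
```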

Journal ArticleDOI
TL;DR: In this article, the exponential rate of convergence of the posterior distribution around its mode is established using the generalized Laplace method, and an example is given.
Abstract: As observations accumulate, the posterior distribution under mild conditions becomes more concentrated in the neighbourhood of its mode as the sample size n increases. In this paper, the exponential rate of convergence of the posterior distribution around the mode is established by using the generalized Laplace method. An example is also given.
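An illustration of the concentration phenomenon (not the paper's proof): a Laplace (normal) approximation around the posterior mode of a Beta posterior tightens as n grows. The data-generating rate of 0.3 is an arbitrary choice.

```python
# Illustration: Laplace approximation N(mode, -1/l''(mode)) to a Beta(k+1, n-k+1)
# posterior, showing concentration around the mode as n increases.
import numpy as np

for n in (20, 200, 2000):
    k = int(0.3 * n)                       # successes out of n (illustrative)
    mode = k / n                           # posterior mode with a flat prior
    # log-posterior l(p) = k*log(p) + (n-k)*log(1-p); curvature at the mode:
    curvature = -k / mode**2 - (n - k) / (1 - mode)**2
    sd = np.sqrt(-1.0 / curvature)         # Laplace (normal) approximation s.d.
    print(f"n={n:5d}  mode={mode:.3f}  approx posterior s.d.={sd:.4f}")
```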

Journal ArticleDOI
TL;DR: In this paper, a general probabilistic framework containing the essential mathematical structure of any statistical physical theory is reviewed and enlarged to enable the generalization of some concepts of classical probability theory.
Abstract: A general probabilistic framework containing the essential mathematical structure of any statistical physical theory is reviewed and enlarged to enable the generalization of some concepts of classical probability theory. In particular, generalized conditional probabilities of effects and conditional distributions of observables are introduced and their interpretation is discussed in terms of successive measurements. The existence of generalized conditional distributions is proved, and the relation to M. Ozawa's a posteriori states is investigated. Examples concerning classical as well as quantum probability are discussed.

Journal ArticleDOI
TL;DR: In this paper, the coverage properties of highest posterior density intervals for three choices of prior are evaluated by simulation and compared with other solutions; the simulations suggest that a second-moment t approximation combined with Jeffreys' prior for the bivariate distribution provides intervals that are quite well calibrated, in the sense of having approximately correct or slightly conservative coverage for a wide range of values of the underlying parameters.
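A generic sketch of how a highest posterior density (HPD) interval can be computed from posterior draws, the kind of interval whose frequentist coverage is studied here; the gamma posterior is just a stand-in example.

```python
# Generic sketch: highest-posterior-density (HPD) interval from posterior draws,
# i.e. the shortest interval containing the target mass. Stand-in example only.
import numpy as np

def hpd_interval(draws, mass=0.95):
    """Shortest interval containing `mass` of the sorted posterior draws."""
    x = np.sort(draws)
    m = int(np.ceil(mass * len(x)))
    widths = x[m - 1:] - x[:len(x) - m + 1]
    i = widths.argmin()
    return x[i], x[i + m - 1]

rng = np.random.default_rng(5)
samples = rng.gamma(shape=3.0, scale=2.0, size=50_000)   # stand-in posterior draws
print(hpd_interval(samples))
```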

Journal ArticleDOI
TL;DR: In this paper, the problem of model choice and inference for the size of a closed animal population is considered from a Bayesian viewpoint using data obtained by a multiple capture-recapture process with trap response.
Abstract: Using data obtained by a multiple capture-recapture process with trap response, the problem of model choice and inference for the size of a closed animal population is considered from a Bayesian viewpoint. Four different models are discussed, and for some of the estimators and tests developed there are no competitors from the classical sampling approach. A variety of prior structures are considered with the purpose of studying the influence of the priors chosen on the posterior distribution. A special prior structure which takes into consideration the possible correlation between capture and recapture probabilities is also analyzed.

Journal ArticleDOI
TL;DR: In this paper, a general method for estimating kinetic parameters in polymerization reactions using Monte Carlo simulation to represent the models of the reactions is developed, and the procedure is a Bayesian one in which a posterior probability density surface (PPDS) is calculated for points on a grid in the parameter space.
Abstract: A general method for estimating kinetic parameters in polymerization reactions using Monte Carlo simulation to represent the models of the reactions is developed. From a statistical point of view, the procedure is a Bayesian one in which a posterior probability density surface (PPDS) is calculated for points on a grid in the parameter space. A smoothing function is fitted to the PPDS, then a posterior probability region, which is similar to a confidence region, is calculated for the parameters. An application to a relatively trivial example, the Mayo–Lewis copolymerization model, is shown in detail. Many other potential applications are suggested.

Journal Article
TL;DR: Using genetic marker data, a general methodology for estimating genetic relationships between a set of individuals is developed and results indicate that with currently available markers a "true" father may be reliably distinguished from any other genetic relationship to the child and that with a reasonable number of markers one can often discriminate between an unrelated individual and one with a second-degree relationship to
Abstract: Using genetic marker data, we have developed a general methodology for estimating genetic relationships between a set of individuals. The purpose of this paper is to illustrate the practical utility of these methods as applied to the problem of paternity testing. Bayesian methods are used to compute the posterior probability distribution of the genetic relationship parameters. Use of an interval-estimation approach rather than a hypothesis-testing one avoids the problem of the specification of an appropriate null hypothesis in calculating the probability of paternity. Monte Carlo methods are used to evaluate the utility of two sets of genetic markers in obtaining suitably precise estimates of genetic relationship as well as the effect of the prior distribution chosen. Results indicate that with currently available markers a "true" father may be reliably distinguished from any other genetic relationship to the child and that with a reasonable number of markers one can often discriminate between an unrelated individual and one with a second-degree relationship to the child.
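The basic Bayesian calculation underlying this kind of analysis can be illustrated with a posterior probability of paternity computed from a combined likelihood ratio (paternity index) and a prior. This is an illustration only; the paper estimates full relationship parameters by interval estimation rather than testing a single hypothesis, and the per-marker likelihood ratios and prior below are made-up numbers.

```python
# Illustration: posterior probability of paternity from per-marker likelihood
# ratios and a prior. The LR values and the prior are hypothetical.
per_marker_LR = [3.2, 1.8, 5.0, 2.4, 0.9]    # P(genotypes | father) / P(genotypes | unrelated)
prior = 0.5                                  # assumed prior probability of paternity

combined_LR = 1.0
for lr in per_marker_LR:
    combined_LR *= lr                        # markers treated as independent

posterior = combined_LR * prior / (combined_LR * prior + (1 - prior))
print(round(posterior, 4))
```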