
Showing papers on "Bayesian probability published in 1978"



Journal ArticleDOI
TL;DR: A model-dependent definition of a similarity matrix is proposed and estimates based on this matrix are justified in a decision-theoretic framework.
Abstract: A parametric model for partitioning individuals into mutually exclusive groups is given. A Bayesian analysis is applied and a loss structure imposed. A model-dependent definition of a similarity matrix is proposed and estimates based on this matrix are justified in a decision-theoretic framework. Some existing cluster analysis techniques are derived as special limiting cases. The results of the procedure applied to two data sets are compared with other analyses.

283 citations


Journal ArticleDOI
TL;DR: The consistency of maximum likelihood and related Bayesian estimates for a general class of observation sequences is treated, following a result by P. E. Caines; the consistency condition is then interpreted in terms of the statistics of linear systems driven by white Gaussian inputs, establishing a verifiable condition for the identifiability of such systems on finite sets of mathematical representations.
Abstract: The consistency of maximum likelihood and related Bayesian estimates for a general class of observation sequences is treated, following a result by P. E. Caines. The condition for consistency is then interpreted in terms of the statistics associated with linear systems driven by white Gaussian inputs, to establish a verifiable condition for the identifiability of such systems on finite sets of mathematical representations.

79 citations


Journal ArticleDOI
TL;DR: In this article, a particular form of classification problem is considered and a "quasi-Bayes" approximate solution requiring minimal computation is motivated and defined, and convergence properties are established and a numerical illustration provided.
Abstract: SUMMARY Coherent Bayes sequential learning and classification procedures are often useless in practice because of ever-increasing computational requirements. On the other hand, computationally feasible procedures may not resemble the coherent solution, nor guarantee consistent learning and classification. In this paper, a particular form of classification problem is considered and a "quasi-Bayes" approximate solution requiring minimal computation is motivated and defined. Convergence properties are established and a numerical illustration provided.

78 citations


Journal ArticleDOI
TL;DR: Subjects presented with fictitious criminal case material were found to overestimate the probabilities of compound as compared with component events; an analogy to Bayesian cascaded inference behaviour is discussed.
Abstract: Subjects presented with fictitious criminal case material were found to overestimate the probabilities of compound as compared with component events. An analogy to Bayesian cascaded inference behaviour was discussed. Results of functional measurement and correlational procedures suggested a three-strategy model of compound probability estimation, applicable where there are two component events. According to the model, subjects base their choice of an information processing strategy on the larger and smaller component probability assessments and on a criterion value for each.

72 citations
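The overestimation described above is relative to the normative rule for compound events, which for independent components is just the product and can never exceed the smaller component probability. A minimal sketch, with illustrative probabilities not taken from the study:

```python
def compound_probability(p_a, p_b):
    """Normative compound probability for two independent component events."""
    return p_a * p_b

# The compound probability can never exceed the smaller component,
# which is exactly the constraint subjects' estimates tended to violate.
p_a, p_b = 0.7, 0.4          # illustrative component assessments
p_compound = compound_probability(p_a, p_b)
assert p_compound <= min(p_a, p_b)
```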


Journal ArticleDOI
TL;DR: In this paper, a general theorem is proven which describes the asymptotic distribution of maximum likelihood estimates subject to identifiability constraints, and a technique is described for displaying Bayesian conditional credibility regions for any sample size.
Abstract: Techniques are developed for surrounding each of the points in a multidimensional scaling solution with a region which will contain the population point with some level of confidence. Bayesian credibility regions are also discussed. A general theorem is proven which describes the asymptotic distribution of maximum likelihood estimates subject to identifiability constraints. This theorem is applied to a number of models to display asymptotic variance-covariance matrices for coordinate estimates under different rotational constraints. A technique is described for displaying Bayesian conditional credibility regions for any sample size.

69 citations


Journal ArticleDOI
TL;DR: Violations of the assumptions of conditional independence and of data reliability are common, their existence or magnitude is often unknown to the processor, and conservatism may therefore be interpreted as a desirable mechanism that compensates for the overconfidence inherent in the inference situation.
Abstract: The phenomenon of conservatism in human Bayesian probability revision is examined for possible adaptive advantages. It is proved that violation of the assumption of conditional independence of data that are not taken into account is likely to result in posterior odds which are too extreme. In addition, any uncertainty in the reliability of the data implies that a Bayesian processor acting on the assumption that the data are reliable will overestimate posterior odds. It is argued that violations of either of the assumptions of conditional independence and reliability of the data are very common. Because of the nature of most inference situations, the existence or magnitude of such violations is often unknown to the processor. Hence conservatism may be interpreted as a desirable mechanism that compensates for the overconfidence inherent in the inference situation. Finally, it is shown that several findings in the Bayesian paradigm may be regarded as outcomes of a process that behaves optimally in view of the violation of these two assumptions.

66 citations
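The reliability argument can be illustrated numerically: a processor that treats a noisy report as if it were the underlying datum computes more extreme posterior odds than one that models the report's reliability. All numbers in this sketch are illustrative assumptions, not values from the paper:

```python
def posterior_odds(prior_odds, p_report_h1, p_report_h0):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * (p_report_h1 / p_report_h0)

p_d_h1, p_d_h0 = 0.8, 0.2    # true diagnosticity of the underlying datum
r = 0.9                      # probability the report faithfully transmits it

# Naive processor: assumes perfect reliability (r = 1).
naive = posterior_odds(1.0, p_d_h1, p_d_h0)

# Calibrated processor: marginalizes over the report being wrong.
p_rep_h1 = r * p_d_h1 + (1 - r) * (1 - p_d_h1)
p_rep_h0 = r * p_d_h0 + (1 - r) * (1 - p_d_h0)
calibrated = posterior_odds(1.0, p_rep_h1, p_rep_h0)

print(naive, calibrated)     # the naive odds are the more extreme
```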


Journal ArticleDOI
TL;DR: The results of Monte Carlo experiments indicate that an α-level substantially larger than that normally used may be appropriate in the Durbin-Watson test, and the Bayesian estimator performed better than all preliminary test estimates in terms of MSE.

61 citations


Journal ArticleDOI
TL;DR: The problem of estimating density functions using data from different distributions, and a mixture of them, is considered; maximum likelihood and Bayesian parametric techniques are summarized, and various approaches using distribution-free kernel methods are expounded.
Abstract: The problem of estimating density functions using data from different distributions, and a mixture of them, is considered. Maximum likelihood and Bayesian parametric techniques are summarized and various approaches using distribution‐free kernel methods are expounded. A comparative study is made using the halibut data of Hosmer (1973) and the problem of incomplete data is briefly discussed.

57 citations
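The distribution-free kernel approach mentioned above can be sketched in a few lines. The Gaussian kernel, bandwidth, and sample values below are illustrative assumptions, not the halibut data of Hosmer (1973):

```python
import math

def kernel_density(xs, data, h):
    """Distribution-free density estimate with a Gaussian kernel and
    bandwidth h, in the spirit of the kernel methods the paper surveys."""
    n = len(data)
    def k(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return [sum(k((x - d) / h) for d in data) / (n * h) for x in xs]

# Two loose clusters stand in for data pooled from two distributions.
data = [1.0, 1.2, 2.8, 3.1, 3.3]
dens = kernel_density([1.0, 2.0, 3.0], data, h=0.5)
print(dens)  # higher density near the clusters, lower in between
```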


Journal ArticleDOI
TL;DR: In this paper, a Bayesian approach is adopted to make inferences about the parameters of a linear model in the possible presence of one or more spurious observations, which is illustrated by analysing a classical set of data.
Abstract: A Bayesian approach is adopted here to make inferences about the parameters of a linear model in the possible presence of one or more spurious observations. The method proposed is illustrated by analysing a classical set of data.

54 citations


Book
01 Jan 1978
TL;DR: A problems-and-solutions text covering significance tests for simple and composite null hypotheses, distribution-free and randomization tests, interval and point estimation, asymptotic theory, Bayesian methods, and decision theory.
Abstract: 1. Introduction.- Summary.- 2. Some general concepts.- Summary.- Problems and solutions.- 3. Pure significance tests.- Summary.- Problems and solutions.- 4. Significance tests: simple null hypotheses.- Summary.- Problems and solutions.- 5. Significance tests: composite null hypotheses.- Summary.- Problems and solutions.- 6. Distribution-free and randomization tests.- Summary.- Problems and solutions.- 7. Interval estimation.- Summary.- Problems and solutions.- 8. Point estimation.- Summary.- Problems and solutions.- 9. Asymptotic theory.- Summary.- Problems and solutions.- 10. Bayesian methods.- Summary.- Problems and solutions.- 11. Decision theory.- Summary.- Problems and solutions.- References.- Author Index.

Journal ArticleDOI
TL;DR: It would appear that the peaking behavior of practical classifiers is caused principally by their nonoptimal use of the features, which contradicts previous interpretations of Hughes' model.
Abstract: Even with a finite set of training samples, the performance of a Bayesian classifier cannot be degraded by increasing the number of features, as long as the old features are recoverable from the new features. This is true even for the general Bayesian classifiers investigated by Hughes, a result which contradicts previous interpretations of Hughes' model. The reasons for these difficulties are discussed. It would appear that the peaking behavior of practical classifiers is caused principally by their nonoptimal use of the features.

Journal ArticleDOI
TL;DR: A reliability analysis is described in which sample moments are used to transform the design parameters to a space in which they have a multivariate normal distribution, and to generate failure risk estimates based on regions of estimated probability content or on statistical tolerance regions in the normal parameter space.

Journal ArticleDOI
01 Jan 1978
TL;DR: In this article, the asymptotic behavior of a Bayes optimal adaptive estimation scheme for a linear, discrete-time system with interrupted observations is investigated and the interrupted observation mechanism is expressed in terms of a stationary two-state Markov chain.
Abstract: The asymptotic behavior of a Bayes optimal adaptive estimation scheme for a linear, discrete-time system with interrupted observations is investigated. The interrupted observation mechanism is expressed in terms of a stationary two-state Markov chain. The transition probability matrix is unknown and can take values only from a finite set.
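The interruption mechanism can be simulated directly. The transition matrix below is one hypothetical member of the finite candidate set, not a value from the paper:

```python
import random

def simulate_gate(P, steps, seed=0):
    """Simulate the stationary two-state chain (0 = observation received,
    1 = observation interrupted) that gates the measurements."""
    rng = random.Random(seed)
    state, path = 0, []
    for _ in range(steps):
        path.append(state)
        state = 0 if rng.random() < P[state][0] else 1
    return path

# One hypothetical member of the finite set of candidate transition matrices.
P = [[0.9, 0.1],
     [0.4, 0.6]]
gates = simulate_gate(P, 1000)
# Long-run fraction of received observations approaches the stationary
# probability 0.4 / (0.1 + 0.4) = 0.8 for this matrix.
```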

Journal ArticleDOI
TL;DR: The probability density function for the number of insects in a grain bulk, as indicated by a sample, is derived using the Bayesian method with a uniform prior; the p.d.f. is negative binomial and approximately chi-square for small samples.
Abstract: The probability density function for the number of insects in a grain bulk, as indicated by a sample, is derived using the Bayesian method with a uniform prior. The p.d.f. is negative binomial and is shown to be approximately chi-square for small samples. Sampling procedures are discussed.
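One standard route to a negative binomial predictive, consistent with the abstract though not necessarily the paper's exact derivation, is the Poisson-gamma argument with a flat prior. The observed count below is an illustrative assumption:

```python
from math import comb

def neg_binomial_pmf(k, r, p):
    """P(K = k): probability of k failures before the r-th success."""
    return comb(k + r - 1, k) * p ** r * (1 - p) ** k

# With a Poisson(lam) model for the sample count and a flat prior on lam,
# observing x insects gives a Gamma(x + 1, 1) posterior; the predictive
# count for a like-sized portion is then negative binomial with
# r = x + 1 and p = 1/2.
x_observed = 3                     # assumed number of insects in the sample
pmf = [neg_binomial_pmf(k, x_observed + 1, 0.5) for k in range(30)]
print(sum(pmf))                    # essentially all the mass is captured
```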

Journal ArticleDOI
TL;DR: In this paper, it was shown that the common and unique variance estimates produced by Martin and McDonald's Bayesian estimation procedure for the unrestricted common factor model have a predictable sum which is always greater than the maximum likelihood estimate of the total variance.
Abstract: It is shown that the common and unique variance estimates produced by Martin & McDonald's Bayesian estimation procedure for the unrestricted common factor model have a predictable sum which is always greater than the maximum likelihood estimate of the total variance. This fact is used to justify a suggested simple alternative method of specifying the Bayesian parameters required by the procedure.



Journal Article
TL;DR: A computer algorithm for the identification of multiform ectopic ventricular complexes in 24-hour ambulatory electrocardiographic recordings is described, which establishes regions, based on R-R interval and ST-segment slope, in a two-dimensional probability space according to the fit of each ventricular complex.
Abstract: A computer algorithm for the identification of multiform ectopic ventricular complexes in 24-hour ambulatory electrocardiographic recordings is described. The clustering technique establishes regions, based on R-R interval and ST-segment slope, in a two-dimensional probability space according to the fit of each ventricular complex. The boundaries of any overlapping regions are analyzed using a Bayesian decision rule to minimize misclassification.

Dissertation
01 Nov 1978
TL;DR: In this article, the authors describe a new approach to steady-state forecasting models based on Bayesian principles and information theory, which extends beyond the constraints of normality and linearity required in all existing forecasting methods.
Abstract: This thesis describes a new approach to steady-state forecasting models based on Bayesian principles and Information Theory. Shannon's entropy function and Jaynes' principle of maximum entropy are the essential results borrowed from Information Theory and are extensively used in the model formulation. The Bayesian Entropy Forecasting (BEF) models obtained in this way extend beyond the constraints of normality and linearity required in all existing forecasting methods. In this sense, it reduces in the normal case to the well known Harrison and Stevens steady-state model. Examples of such models are presented, including the Poisson-gamma process, the Binomial-Beta process and the Truncated Normal process. For all of these, numerical applications using real and simulated data are shown, including further analyses of the epidemic data of Cliff et al. (1975).
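The conjugate core of such a steady-state count model can be sketched as a Poisson-gamma recursion. This shows only plain conjugate updating, not the maximum-entropy machinery of the thesis; the prior and the count series are illustrative:

```python
def poisson_gamma_step(alpha, beta, y):
    """One conjugate update: prior Gamma(alpha, beta) on the rate,
    observation y ~ Poisson(rate) -> posterior Gamma(alpha + y, beta + 1)."""
    return alpha + y, beta + 1.0

alpha, beta = 1.0, 1.0             # assumed vague prior
for y in [4, 6, 5, 7]:             # illustrative count series
    alpha, beta = poisson_gamma_step(alpha, beta, y)

forecast = alpha / beta            # one-step-ahead mean of the next count
print(forecast)
```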

Journal ArticleDOI
TL;DR: In this article, the authors use the Bayesian predictive density function to constuct a sequential test that is based on some statistic of a future sample, and investigate the performance of such a test for exponentially distributed lifetimes.
Abstract: Sequential tests that are based on some unobservable parameter have been well investigated using both classical and Bayesian approaches; see, for example, Wald (1947) and Barnett (1972). In this paper we use the Bayesian predictive density function to construct a sequential test that is based on some statistic of a future sample, and investigate the performance of such a test for exponentially distributed lifetimes.
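For exponential lifetimes with a conjugate gamma prior on the rate, the predictive distribution of a future lifetime is Lomax (Pareto type II), which gives one concrete statistic such a test could monitor. The prior values, lifetimes, and threshold in this sketch are illustrative assumptions:

```python
def lomax_survival(t, alpha, beta):
    """P(T_future > t) for exponential lifetimes with a Gamma(alpha, beta)
    prior on the rate: the predictive distribution is Lomax (Pareto II)."""
    return (beta / (beta + t)) ** alpha

alpha, beta = 2.0, 10.0                  # assumed prior
for lifetime in [3.0, 8.0, 5.0]:         # conjugate sequential updating
    alpha, beta = alpha + 1.0, beta + lifetime

# A sequential rule might stop once the predictive chance that a future
# unit survives past t0 = 20 falls below some cutoff.
p_survive = lomax_survival(20.0, alpha, beta)
print(p_survive)
```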

Book ChapterDOI
01 Jan 1978
TL;DR: In this article, the general nature of conventional mathematical statistics is discussed and the relevance of parametric theory to the finite population inference problem is questioned, and the role of labeling is considered, and some work of Hartley and Rao discussed.
Abstract: In this essay the general nature of conventional mathematical statistics is discussed. It is suggested that Neyman–Pearson–Wald decision theory is ineffective and that the making of terminal decisions must involve some sort of Bayesian process. Bayesian theory is, however, deficient with respect to the obtaining of a prior distribution, which seems to be a data-analysis problem and the basic inference problem in a decision context. The relevance of parametric theory to the finite population inference problem is questioned. It is considered that intrinsic aspects of the logic of inference are given by the cases of populations of size one and two, and these are discussed. The role of labeling is considered, and some work of Hartley and Rao discussed. It is suggested that there are difficulties in unequal probability sampling with respect to the ultimate inferences, that is, beyond the matters of estimation and variance of estimators. It is suggested that admissibility theory is ineffective. Absence of attention to pivotality is deplored.

Journal ArticleDOI
TL;DR: The interpretation of the subjective Bayesian approach to statistical inference is examined, and an alternative approach in which the 'probabilities' that the authors assign to the models are regarded as in themselves uninterpretable, but nevertheless capable of giving rise to a predictive distribution that can be interpreted in a degree of belief sense is suggested.
Abstract: This paper examines the interpretation of the subjective Bayesian approach to statistical inference. In section 1, I outline the structure of the Bayesian approach. In sections 2 and 3, I argue that it is difficult to assign degrees of belief to probability models, as seems to be required in a subjective Bayesian argument. I then propose two ways out of this difficulty, and examine their implications for current Bayesian practice: (i) First, I suggest that we should link our models, which are to be regarded as models for our degrees of belief, to subsets of the set of all possible data sets that could arise from the cases in which we envisage making predictions. Degrees of belief are then assigned to the subsets (sections 4-6). (ii) In view of some difficulties with interpretation (i), I suggest an alternative approach in which we regard the 'probabilities' that we assign to the models as in themselves uninterpretable, but nevertheless capable of giving rise to a predictive distribution that can be interpreted in a degree of belief sense (sections 7 and 8).

Journal ArticleDOI
TL;DR: The prediction problem has mostly been viewed in two ways, the classical approach based on independent statistics and their exact distributions, and the Bayes approach with posterior distributions and suitable priors; here the posterior at one stage is used as the prior for the next, yielding predictive distributions for order statistics over several stages.
Abstract: 1. Introduction. The prediction problem, which is receiving much attention recently, has been viewed mostly in two directions. One is the classical approach based on the independence of statistics and their exact distributions. Such is the case in Lawless [9], Faulkenberry [8], Kaminsky, Luks and Nelson [7] and also in Lingappaiah [10], [11]. The other method is the Bayes approach with posterior distributions and suitable priors. Such works are seen in Bancroft and Dunsmore [1], Aitchison and Dunsmore [2] and Dunsmore [3], [4]. Our development here is based on the last of these results, Dunsmore [4], and on Lingappaiah [11]. Our main motivation is what can be done when more than a pair of samples is available. What we have done here is to consider the posterior distribution at a certain stage as the prior for the next stage, on the lines of Khan [12], and in so doing we have developed the predictive distribution for an order statistic at the sth stage ((s+1)th sample) and also for the difference of two statistics at this stage. We have discussed the variance in each of these two cases in relation to the number of stages. Also, we have evaluated the probability integral for both situations, and particular cases are considered for illustration.
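The staging scheme, treating the posterior at one stage as the prior for the next, can be sketched with any conjugate family; a Beta-binomial is used below purely for illustration (the paper works with order statistics, not Bernoulli counts), with assumed stage data:

```python
def stage_update(a, b, successes, trials):
    """Treat the posterior Beta(a, b) from one stage as the prior for the
    next stage's sample, then condition on that sample."""
    return a + successes, b + (trials - successes)

a, b = 1.0, 1.0                          # flat prior before the first stage
stages = [(7, 10), (5, 10), (9, 10)]     # (successes, trials) per stage
for s, n in stages:
    a, b = stage_update(a, b, s, n)

print(a / (a + b))   # posterior mean after the final stage
```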

Journal ArticleDOI
P. W. Jones
TL;DR: In this article, a fixed sample size and sequential procedures are considered for the estimation of the unknown parameter v of a uniform distribution over (0, v) under several loss functions.
Abstract: Bayesian fixed-sample-size and sequential procedures are considered for the estimation of the unknown parameter v of a uniform distribution over (0, v), under several loss functions.
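For Uniform(0, v) data the Pareto family is conjugate for v, which makes the fixed-sample Bayes estimate under squared-error loss easy to sketch. The prior parameters and data below are illustrative, and the paper also treats other loss functions and sequential rules:

```python
def pareto_update(alpha, m, data):
    """Conjugate update for Uniform(0, v) data with a Pareto(alpha, m)
    prior on v: posterior is Pareto(alpha + n, max(m, max(data)))."""
    return alpha + len(data), max([m] + list(data))

def pareto_mean(alpha, m):
    """Posterior mean (the Bayes estimate under squared-error loss);
    finite only for alpha > 1."""
    return alpha * m / (alpha - 1.0)

alpha, m = 2.0, 1.0                 # assumed prior
data = [0.8, 2.4, 1.9, 2.1]         # illustrative Uniform(0, v) sample
alpha, m = pareto_update(alpha, m, data)
print(pareto_mean(alpha, m))        # estimate of v
```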

Journal ArticleDOI
TL;DR: In this article, a method of dealing with partiallyresponded-to questionnaires in survey research which utilizes nonparametric Bayesian discriminant analysis to predict missing responses based on profiles of the partial and the complete respondents is described.
Abstract: This article describes a method of dealing with partially-responded-to questionnaires in survey research which utilizes nonparametric Bayesian discriminant analysis to predict missing responses based on “profiles” of the partial and the “complete” respondents. Some background of nonresponse bias in survey research is discussed, and two other methods of dealing with partial nonresponse bias are briefly criticized. The Bayesian method is detailed along with its statistic, and limitations on the use and effectiveness of all the methods are considered.



01 Jun 1978
TL;DR: In this article, a number of statistical techniques useful in solving search problems are presented, such as Bayesian, minimax, and maximum likelihood inferential techniques in the estimation of the position of a moving target during a search.
Abstract: This work consists of the development of a number of statistical techniques useful in solving search problems. Among these are the employment of Bayesian, minimax, and maximum likelihood inferential techniques in the estimation of the position of a moving target during a search. An application of potential theory to search problems is also considered.

Journal ArticleDOI
TL;DR: In this paper, a Bayesian analysis of multiplicative treatment effects is given for a normally distributed population, where the two samples taken, one for each treatment, are used to make inferences on the multiplicative effect.
Abstract: A Bayesian analysis of multiplicative treatment effects is given for a normally distributed population. The two samples taken, one for each treatment, are used to make inferences on the multiplicative effect. An analysis of the general problem with uneven sample sizes and prior information is rather intractable, which leads to the use of numerical integration methods. A special case for which analytical results are available is discussed. Finally, the method suggested in this paper is compared with an approach utilizing a logarithmic transformation.
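The logarithmic-transformation route mentioned at the end can be sketched with a simple plug-in estimate: take means on the log scale and exponentiate the difference. This is a crude point estimate under assumed samples, not the paper's full Bayesian analysis:

```python
import math

def multiplicative_effect(sample_a, sample_b):
    """Plug-in estimate of a multiplicative treatment effect via the
    log transform: difference of log-scale means, exponentiated back."""
    mean = lambda xs: sum(xs) / len(xs)
    log_a = mean([math.log(x) for x in sample_a])
    log_b = mean([math.log(x) for x in sample_b])
    return math.exp(log_b - log_a)

effect = multiplicative_effect([10.0, 12.0, 11.0], [20.0, 22.0, 24.0])
print(effect)   # treatment B scales treatment A by about this factor
```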