
Showing papers on "Bayesian probability published in 1970"



Journal ArticleDOI
TL;DR: The Bayesian theory for testing a sharp hypothesis, defined by fixed values of parameters, is presented in general terms in this article, where an arbitrary positive prior probability is attached to the hypothesis and the ratio of posterior to prior odds for the hypothesis is given by the weighted likelihood ratio.
Abstract: The Bayesian theory for testing a sharp hypothesis, defined by fixed values of parameters, is here presented in general terms. An arbitrary positive prior probability is attached to the hypothesis. The ratio of posterior to prior odds for the hypothesis is given by the weighted likelihood ratio, shown here to equal Leonard J. Savage's (1963) ratio of a posterior to a prior density (221). This Bayesian approach to hypothesis testing was suggested by Jeffreys (1948), Savage (1959), (1961), Lindley (1961), and Good (1950), (1965), but obscured somewhat by approximations and unique choices of prior distributions. This Bayesian theory is distinct from that of Lindley (1965) and that of Dickey (1967a). Applications are given to hypotheses about multinomial means, for example, equality of two binomial probabilities. A new test is presented for the order of a finite-state Markov chain.
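The abstract's worked example, testing equality of two binomial probabilities, can be sketched with uniform Beta(1, 1) priors; the prior choices and function names below are illustrative assumptions, not the paper's exact construction:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    # log of the Beta function B(a, b), via log-gamma for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def marginal_binomial(x, n, a=1.0, b=1.0):
    # Prior-predictive probability of x successes in n trials
    # under a Beta(a, b) prior on the success probability
    return comb(n, x) * exp(log_beta(a + x, b + n - x) - log_beta(a, b))

def bayes_factor_equal_props(x1, n1, x2, n2):
    # H0: p1 = p2 = p with p ~ Beta(1, 1);  H1: p1, p2 independent Beta(1, 1).
    # The returned weighted likelihood ratio m(data | H0) / m(data | H1)
    # converts prior odds into posterior odds for H0.  (B(1, 1) = 1, so the
    # prior normalizer for H0 is omitted.)
    m0 = comb(n1, x1) * comb(n2, x2) * exp(
        log_beta(1 + x1 + x2, 1 + (n1 - x1) + (n2 - x2)))
    m1 = marginal_binomial(x1, n1) * marginal_binomial(x2, n2)
    return m0 / m1

bf = bayes_factor_equal_props(7, 20, 14, 20)
```

Identical observed proportions push the Bayes factor above 1 (toward the sharp hypothesis), while strongly discrepant proportions push it toward 0.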

199 citations



Journal ArticleDOI
14 Mar 1970-Nature
TL;DR: It is regrettable that Edwards's interesting article (ref. 1), supporting the likelihood and prior likelihood concepts, did not point out the specific criticisms of likelihood concepts that seem to dissuade most theoretical and applied statisticians from adopting them.
Abstract: It is regrettable that Edwards's interesting article (ref. 1), supporting the likelihood and prior likelihood concepts, did not point out the specific criticisms of likelihood (and Bayesian) concepts that seem to dissuade most theoretical and applied statisticians from adopting them. As one whom Edwards particularly credits with having “analysed in depth … some attractive properties” of the likelihood concept, I must point out that I am not now among the “modern exponents” of the likelihood concept. Further, after suggesting that the notion of prior likelihood was plausible as an extension or analogue of the usual likelihood concept (ref. 2, p. 200), I have pursued the matter through further consideration and rejection of both the likelihood concept and various proposed formalizations of prior information and opinion (including prior likelihood). I regret not having expressed my developing views in any formal publication between 1962 and late 1969 (just after ref. 1 appeared). My present views have now, however, been published in an expository but critical article (ref. 3, see also ref. 4), and so my comments here will be restricted to several specific points that Edwards raised.

49 citations


Journal ArticleDOI
TL;DR: In a single-cue probability learning (SPL) experiment using tasks with scaled cue and criterion variables, the relative consistency of the inference behavior of the subjects was found to be the same for the three levels of cue validity (.45, .70, and .90), but the subjects were found to make more extreme inferences at the lower levels of validity than at the higher levels.

30 citations


Journal ArticleDOI
TL;DR: In this article, at various points in a two-choice probability learning experiment, Ss were interrupted and asked to estimate the probability of the most frequent of the two stimulus events, compared with the proportion of trials on which other Ss predicted the events.
Abstract: At various points in a two-choice probability learning experiment, Ss were interrupted and asked to estimate the probability of the more frequent of the two stimulus events. The Ss' estimates were compared with the proportion of trials on which other Ss predicted the events. The estimates change as a function of training in a manner consistent with a simple Bayesian revision model. (Author)
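A minimal sketch of the kind of Bayesian revision model the abstract mentions, assuming conjugate Beta-Bernoulli updating of the judged event probability (the prior and update rule here are illustrative assumptions, not the paper's):

```python
def beta_update(a, b, outcomes):
    # Conjugate Beta-Bernoulli revision: a Beta(a, b) opinion about
    # P(event 1) gains one count per observed trial outcome
    # (1 for event 1, 0 for event 2).
    for y in outcomes:
        a, b = a + y, b + (1 - y)
    return a, b

def posterior_mean(a, b):
    # Point estimate of the event probability after revision
    return a / (a + b)

# Uniform prior, then a 70:30 stream of the two events
a, b = beta_update(1, 1, [1] * 70 + [0] * 30)
est = posterior_mean(a, b)  # drifts toward the programmed event probability 0.7
```

With more training trials the posterior mean converges on the programmed event frequency, which is the qualitative pattern the abstract reports.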

13 citations


Journal ArticleDOI
TL;DR: In this paper a Sequential Bayes Procedure (SBP) for demonstrating that θ exceeds θ1 is presented, which differs from the classical procedure in the sense that a prior distribution is assumed on the parameter θ, calling for a Bayesian approach.
Abstract: A common problem in life testing is to demonstrate that the mean time to failure, θ, exceeds some minimum acceptable value, say θ1, with a given confidence coefficient γ. When this is true, it is said that “θ1 has been demonstrated with a confidence γ”. In this paper a Sequential Bayes Procedure (SBP) for demonstrating (by means of a probability statement) that θ exceeds θ1 is presented. The SBP differs from the classical procedure in the sense that a prior distribution is assumed on the parameter θ, calling for a Bayesian approach. The procedure is based on the sequence of statistics.
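A sketch of the posterior computation such a sequential procedure could use, assuming exponential lifetimes with a conjugate gamma prior on the failure rate; the prior and stopping rule below are illustrative, not the paper's exact SBP:

```python
from math import exp

def gamma_cdf_int_shape(x, k, rate):
    # CDF of a Gamma(shape=k, rate) variable at x, for integer k, using the
    # identity P(Gamma(k, rate) <= x) = P(Poisson(rate * x) >= k).
    s, term = 0.0, exp(-rate * x)
    for i in range(k):
        s += term
        term *= rate * x / (i + 1)
    return 1.0 - s

def demonstrated(r, T, theta1, gamma_conf, a0=1, b0=0.0):
    # After r failures in total time on test T, with a Gamma(a0, b0) prior on
    # the failure rate lam = 1/theta, the posterior is Gamma(a0 + r, b0 + T).
    # "theta exceeds theta1" is the event lam < 1/theta1; sampling stops once
    # its posterior probability reaches the required confidence gamma_conf.
    return gamma_cdf_int_shape(1.0 / theta1, a0 + r, b0 + T) >= gamma_conf
```

For example, two failures in 1000 hours of total test time demonstrate θ1 = 100 hours at γ = 0.95, whereas two failures in only 100 hours do not.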

13 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated Ss' perceptions of the likelihood of several interpersonal relationships in hypothetical two- and three-person groups, where Ss were asked to judge the probability that two people were joined by a specific interpersonal relationship, given various amounts and kinds of prior information regarding the incidence of that relationship in the group.

12 citations



Journal ArticleDOI

8 citations


Journal ArticleDOI
TL;DR: The results of the study strongly support the premise that the auditor can obtain superior results from the Bayesian model in estimating unknown quantities as opposed to those obtained through either the non-statistical approach or the classical statistical approach.
Abstract: The basic purpose of this study was the investigation of the usefulness of Bayesian statistical techniques to problems of estimation in the field of auditing. By using subjective probabilities the auditor can explicitly bring prior knowledge to bear on his problem by incorporating his feelings in the sampling process and thus obtain an efficient method by which refined estimates can be obtained. The results of the study strongly support the premise that the auditor can obtain superior results from the Bayesian model in estimating unknown quantities as opposed to those obtained through either the non-statistical approach or the classical statistical approach.


Journal ArticleDOI
TL;DR: The results indicate that this relationship depends somewhat on the parameters of the distribution and loss function; on the average, the intuitive estimates were quite close to the optimal estimates despite the complexity of the decision-making problem.


Journal ArticleDOI
TL;DR: The present study considers the way in which estimates of age are revised as further information is presented and compares subjects' revisions with a Bayesian model and the diagnostic impact of the initial item of information.


Journal ArticleDOI
01 Jan 1970
TL;DR: This research paper presents a packet header anomaly detection approach using a Bayesian belief network, a probabilistic machine learning model, which achieves a very high detection rate in both reliability and precision.
Abstract: This research paper presents a packet header anomaly detection approach using a Bayesian belief network, a probabilistic machine learning model. A DARPA dataset was used for performance evaluation on packet header anomaly detection of DoS-type intrusions. The proposed method using a Bayesian network gives an outstanding result, achieving a very high detection rate with reliability of 99.04% and precision of 97.33% on average.
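A minimal sketch of probabilistic classification over categorical packet-header fields; naive Bayes is used here as a simple special case of a Bayesian belief network (not the paper's exact model), and the field values and tiny dataset are made up for illustration:

```python
from collections import defaultdict
from math import log

class NaiveBayesHeaders:
    # Naive-Bayes classifier over categorical packet-header fields,
    # with Laplace (add-one) smoothing to avoid zero counts.
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(lambda: defaultdict(int))
        self.values = defaultdict(set)

    def fit(self, rows, labels):
        # Count class frequencies and per-class value frequencies per field
        for row, y in zip(rows, labels):
            self.class_counts[y] += 1
            for i, v in enumerate(row):
                self.feat_counts[(y, i)][v] += 1
                self.values[i].add(v)

    def predict(self, row):
        # Pick the class maximizing log P(class) + sum_i log P(field_i | class)
        n = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for y, cy in self.class_counts.items():
            lp = log(cy / n)
            for i, v in enumerate(row):
                k = len(self.values[i])
                lp += log((self.feat_counts[(y, i)][v] + 1) / (cy + k))
            if lp > best_lp:
                best, best_lp = y, lp
        return best

# Hypothetical (protocol, destination port) header tuples
rows = [("tcp", "80"), ("tcp", "443"), ("udp", "53"), ("icmp", "0"), ("icmp", "0")]
labels = ["normal", "normal", "normal", "dos", "dos"]
nb = NaiveBayesHeaders()
nb.fit(rows, labels)
```

A full belief network would additionally model dependencies between header fields; naive Bayes assumes they are conditionally independent given the class.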

01 Feb 1970
TL;DR: In this article, a sensitivity index is developed which measures the performance loss that occurs when the receiver is designed to be optimal with respect to the given a priori density g(.) but operates in an environment in which the a priori density h(.) is considered to hold.
Abstract: Receiver design and performance from a Bayesian viewpoint depend upon a priori specification whenever unknown parameters are encountered in the detection situation; any available information is expressed in the form of an a priori density. A sensitivity index is developed which measures the performance loss that occurs when the receiver is designed to be optimal with respect to the given a priori density g(.) but operates in an environment in which the a priori density h(.) is considered to hold. A comparison of receiver performance is made for the composite hypothesis situation. The Bayesian approach is contrasted to the classical approach. Initial indications were that classical statistics could be closely linked to Bayesian philosophy since analysis according to either mode often led to the same receiver. It appeared that many of the classical tests could be generated from a Bayesian viewpoint by an appropriate assignment of the a priori density. Investigation revealed that this was not true in general; and the conclusion is drawn that the Bayesian approach is uniquely distinct from the classical approach. The externally sensed parameter receiver is reviewed and its receiver operating characteristic is evaluated for several examples not considered before. Receiver design via numerical integration techniques is demonstrated to be feasible for composite hypothesis situations previously considered too complex to solve. Receiver design via estimation techniques is considered justifiable in case optimal procedures are too complex. (Author)

Journal ArticleDOI
TL;DR: In this article, Bayesian and frequentist solutions to the problem are considered: the Bayesian first specifies a prior distribution π(θ) for θ and combines this with the likelihood, arriving at a posterior distribution of θ and hence at the appropriate marginal distribution of φ, from which he makes a probability statement.
Abstract: SUPPOSE that we have a random sample of n observations, s = (x1, ..., xn), on a continuous random variable with density function f(x, θ) depending on k unknown parameters θ = (θ1, ..., θk). Our main object is to use the sample s to make probability statements relevant to the values of some k' (< k) components of θ which, with no loss of generality, we take to be (θ1, ..., θk') = φ (say). Consider the Bayesian and frequentist solutions to this problem. The Bayesian first specifies a prior distribution π(θ) for θ and, combining this with the likelihood, arrives at a posterior distribution of θ and hence at the appropriate marginal distribution of φ, from which he makes a probability statement of the form
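The construction the abstract describes — combine the prior with the likelihood, then marginalize down to the k' components of interest (written φ here) — is the standard one:

```latex
\pi(\theta \mid s) \;\propto\; \pi(\theta)\,\prod_{i=1}^{n} f(x_i,\theta),
\qquad
\pi(\phi \mid s) \;=\; \int \pi(\theta \mid s)\, d\theta_{k'+1}\cdots d\theta_k .
```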


Journal ArticleDOI
TL;DR: The feasibility and efficacy of a Bayesian method, due to Lindley, for estimating regressions in m groups is studied by application to data from the Law School Admissions Test program and the Comparative Guidance program.
Abstract: The feasibility and efficacy of a Bayesian method, due to Lindley, for estimating regressions in m groups is studied by application to data from the Law School Admissions Test program and the Comparative Guidance program. Easily computable asymptotic solutions to the Lindley equations are provided and shown to approximate well the full Bayesian solution in some situations. Numerical investigations are described that lead to the adoption of efficient starting values and an efficient sequencing for the iterative solution of the Lindley equations. The advantages of the Bayesian method over more conventional procedures are discussed. Difficulties encountered by Lindley in the statement of one of his assumptions of independence are investigated and a resolution is found. The validities of other assumptions are checked. Problems arising in the extension of Lindley's method to the multiple correlation case are discussed and some insight is gained through the use of principal component methods.

DOI
01 Jan 1970
TL;DR: This paper presents a solution of the estimation problem by using prior knowledge in Bayesian theory with a loss function, and provides an attractive approach to overcoming the limited data problem.
Abstract: Bayesian Estimate of System Reliability, by Naresh Shah, Master of Science, Utah State University, 1970. Major Professor: Mr. R. V. Canfield. Department: Applied Statistics. A Bayesian estimate of reliability for each component in a system of n components, each exponentially distributed, is developed which utilizes the basic notion of loss in estimation theory. Here we assume that each component is independently distributed. In reliability estimation, the loss associated with overestimation is usually greater than the loss associated with underestimation, and hence a loss function can be a very useful tool. The prior distribution and loss function of reliability considered in this paper are flexible enough to be compatible with other situations in which reliability estimates are required. When the loss function is symmetric and no prior information is at hand, the resulting estimate is approximately the minimum variance unbiased estimate of reliability. (39 pages)

INTRODUCTION. Reliability is the probability that a device will perform its purpose adequately for the period of time intended under the operating conditions encountered. Generally, underestimation of reliability results in the unnecessary expense of redundancy or other measures to bring the reliability up to a desired level. Overestimation of reliability results in unwarranted confidence which may lead to total mission failure. In practice, the loss incurred by underestimation of reliability is usually less than the loss incurred when reliability is overestimated. For this reason, lower confidence bounds have been used as estimates of reliability. This approach neglects the basic notion of a loss function in decision theory (Lindgren, 1968). Consider a system of n components subjected to an environmental life test. Due to time or budget limitations, it may be necessary to terminate testing after a limited number of failures or after a certain amount of time for a particular component. The engineer is required to establish the estimate of reliability for each component with this limited amount of data. A great deal of knowledge may be available through past experience with similar items. Bayesian theory permits the incorporation of this prior information into the reliability estimate and thus provides an attractive approach to overcoming the limited data problem. This paper presents a solution of the estimation problem by using prior knowledge in Bayesian theory with a loss function. A loss function is described which permits weighting of loss to reflect any attitude toward overestimation. The exponential model of reliability is used; thus the reliability R(θ, t) is given by

R(θ, t) = e^(−θt)    (1)

where θ is the failure rate and t is the fixed mission time. It was observed that when the loss function is symmetric and the prior distribution of the failure rate is uniform (i.e., no prior knowledge), the resulting reliability estimate is approximately the minimum variance unbiased estimate (Pugh, 1963). To briefly restate the postulate of Bayes, assume that we know a certain conditional density function f(Z|θ) and we desire to know h(θ|Z). We may write:

h(θ|Z) = f(Z, θ) / f(Z) = f(Z|θ) g(θ) / ∫ f(Z|θ) g(θ) dθ    (2)

where the integration is performed over all θ, to give the marginal density of Z. The only unknown quantity in Equation (2) is the prior distribution of θ, g(θ). So, if we know the prior distribution of θ, g(θ), then we obtain h(θ|Z), which is known as the posterior distribution. We define our loss function as ℓ(θₐ, θ), where θₐ is the estimate of θ. In this case, the Bayes posterior loss is defined as

B(θₐ) = E[ℓ(θₐ, θ)] = ∫ ℓ(θₐ, θ) h(θ|Z) dθ
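Under the exponential reliability model, a conjugate gamma prior on the failure rate gives the Bayes estimate of R(θ, t) in closed form. The sketch below uses squared-error loss (so the estimate is the posterior mean) rather than the thesis's asymmetric loss function, and its prior parameters are illustrative:

```python
def bayes_reliability(r, T, t, a0=1.0, b0=0.0):
    # Exponential model R(theta, t) = exp(-theta * t), Gamma(a0, b0) prior on
    # the failure rate theta. After r failures in total test time T,
    # theta | data ~ Gamma(a0 + r, b0 + T), and the posterior mean of
    # exp(-theta * t) (the Bayes estimate under squared-error loss) has the
    # closed form ((b0 + T) / (b0 + T + t)) ** (a0 + r).
    return ((b0 + T) / (b0 + T + t)) ** (a0 + r)
```

An asymmetric loss, as the thesis advocates, would shift the estimate below this posterior mean, penalizing overestimation of reliability more heavily.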

Journal ArticleDOI
TL;DR: The more important and complex the decisions were perceived to be, the more conservative the subjects were when compared to “optimal” Bayesian values (p < .01); the differences were more pronounced as the number of alternatives to be considered increased.
Abstract: A 2 × 3 design was used to explore the effects of complexity and importance of decision on conservatism in probabilistic inference. 120 undergraduate males considered either two or three alternatives for one of three problems differing in perceived importance. Upon receiving certain information, subjects indicated which of the several alternative explanations for their problem was more or most probable. These probability statements were compared with “optimal” probabilities arrived at using Bayes' theorem. The more important and complex the decisions were perceived to be, the more conservative the subjects were when compared to “optimal” Bayesian values (p < .01). The differences were more pronounced as the number of alternatives to be considered increased (p < .01). Seven individual difference variables measuring adequacy in processing information were unrelated to conservatism. A process analysis suggested that strongly held initial opinions can limit the judged relevance of subsequent information and, in turn, affect the “optimal” Bayesian decision.
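The “optimal” probabilities the subjects were compared against come from a single application of Bayes' theorem over the alternative explanations, which can be sketched as:

```python
def bayes_revise(priors, likelihoods):
    # One application of Bayes' theorem over k mutually exclusive
    # alternatives: posterior_i is proportional to prior_i * P(datum | i),
    # then the products are normalized to sum to 1.
    joint = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

# Two alternatives, equal priors; the datum is twice as likely under the first.
post = bayes_revise([0.5, 0.5], [0.8, 0.4])  # -> [2/3, 1/3]
```

“Conservatism” in this literature means subjects' revised probabilities stay closer to the priors than these Bayesian posteriors after the same evidence.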