Journal Article•DOI•

Murder and (of?) the likelihood principle: A Trialogue

TL;DR: The Likelihood Principle of Bayesian inference asserts that only likelihoods matter to single-stage inference, but in hierarchical models it does not necessarily apply to a single data element taken alone; this has unfortunate implications, in particular that the inputs to Bayesian arithmetic at all levels cannot all be likelihood ratios.
Abstract: The Likelihood Principle of Bayesian inference asserts that only likelihoods matter to single-stage inference. A likelihood is the probability of evidence given a hypothesis multiplied by a positive constant. The constant cancels out of simple versions of Bayes's Theorem, and so is irrelevant to single-stage inferences. Most non-statistical inferences require a multi-stage path from evidence to hypotheses; testimony that an event occurred does not guarantee that in fact it did. Hierarchical Bayesian models explicate such cases. For such models, the Likelihood Principle applies to a collection of data elements treated as a single datum conditionally independent of other similar collections. It does not necessarily apply to a single data element taken alone. This has unfortunate implications; in particular, it does not permit the inputs to Bayesian arithmetic at all levels to be likelihood ratios. These issues are sorted out in the context of a trial in which one author is accused of murdering another, with the third as a key witness.
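The abstract's point that the positive constant cancels out of simple versions of Bayes's Theorem can be checked with a small numerical sketch (the two hypotheses, priors, and likelihood values below are invented for illustration):

```python
# Bayes's Theorem: posterior ∝ prior × likelihood.
# Multiplying every likelihood by the same positive constant c
# changes nothing, because c cancels in the normalization step.

def posterior(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

priors = [0.5, 0.5]                      # P(H1), P(H2)
likelihoods = [0.8, 0.2]                 # P(E | H1), P(E | H2)
scaled = [3.7 * l for l in likelihoods]  # same likelihoods times c = 3.7

print(posterior(priors, likelihoods))
print(posterior(priors, scaled))  # identical up to float rounding
```

This is exactly why, for single-stage inference, only the likelihood ratio between hypotheses matters, not the absolute probabilities of the evidence.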
Citations
Journal Article•DOI•
TL;DR: It is suggested that the impact of source reliability attributes may be more complex than portrayed in the auditing standards and that recognizing these subtleties may lead to greater efficiency and effectiveness.
Abstract: This paper provides a normative framework for how external auditors should evaluate internal audit (IA) work, with a view to assessing the risk of material misstatement. The central issue facing the external auditor when evaluating IA work is the reliability of IA work. Reliability assessments are structured using the cascaded inference framework from behavioral decision theory, in which attributes of source reliability are explicitly modeled and combined using Bayes' rule in order to determine the inferential value of IA work. Results suggest that the inferential value of an IA report is highly sensitive to internal auditor reporting bias, but relatively insensitive to reporting veracity. Veracity refers to internal auditors' propensity to report truthfully, whereas bias refers to the propensity to misreport findings. Results also indicate that this sensitivity to reporting bias is conditional on the level of internal auditor competence, thus suggesting significant interaction effects between the objectivity and competence factors. Collectively, these findings suggest that the impact of source reliability attributes may be more complex than portrayed in the auditing standards and that recognizing these subtleties may lead to greater efficiency and effectiveness.

25 citations

Journal Article•DOI•
TL;DR: It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely.
Abstract: Formal logic operates in a closed system where all the information relevant to any conclusion is present, whereas this is not the case when one reasons about events and states of the world. Pollard and Richardson drew attention to the fact that the reasoning behind statistical tests does not lead to logically justifiable conclusions. In this paper statistical inferences are defended not by logic but by the standards of everyday reasoning. Aristotle invented formal logic, but argued that people mostly get at the truth with the aid of enthymemes--incomplete syllogisms which include arguing from examples, analogies and signs. It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely. Of existing theories of statistical inference only a weak version of Fisher's takes this into account. Aristotle anticipated Fisher by producing an argument of the form that there were too many cases in which an outcome went in a particular direction for that direction to be plausibly attributed to chance. We can therefore conclude that Aristotle would have approved of statistical inference and there is a good reason for calling this form of statistical inference classical.

10 citations


Cites background from "Murder and (of?) the likelihood pri..."

  • ...Pollard and Richardson (1987) made it clear that they were not attacking the use of statistical tests but the way they had been characterized....


Journal Article•DOI•
TL;DR: This paper explores the implications of research results in behavioural decision theory on knowledge engineering and an approach to knowledge engineering is proposed that takes into account these implications.
Abstract: This paper explores the implications of research results in behavioural decision theory on knowledge engineering. Behavioural decision theory, with its performance (versus process) orientation, can tell us a great deal about the validity of human expert knowledge, and when it should be modelled. A brief history of behavioural decision theory is provided. Implications for knowledge elicitation and representation are discussed. An approach to knowledge engineering is proposed that takes into account these implications.

10 citations

01 Jan 1996

8 citations

Book Chapter•DOI•
David A. Schum•
01 Jan 1999
TL;DR: Research in what has been called the science of complexity has brought together persons from many disciplines who, in the past, might not have been so congenial to the thought of collaborating; as such research proceeds, more elements of the process producing a phenomenon of interest, and the manner in which those elements interact, come to be recognized.
Abstract: Stimulated by curiosity as well as by necessity, we undertake studies of various phenomena and the processes that seem to produce them. Our initial explanations of the process by which some phenomenon is produced may often be oversimplified or possibly entirely mistaken. But, as our research continues we begin to recognize more elements of this process and the manner in which they appear to interact in producing the phenomenon of interest. In other words, as discovery lurches forward we begin to capture more of what we might regard as complexities or subtleties involving these elements and their interactions. In the last decade or so there has been growing interest in the study of complexity itself. Research in what has been called the science of complexity [Waldrop, 1992, 9; Casti, 1994, 269-274] has brought together persons from many disciplines who, in the past, might not have been so congenial to the thought of collaborating. At present there are various accounts of what complexity means and how it emerges. In some studies it is observed that simple processes can produce complex phenomena; in others, it is observed that what we often regard as simple phenomena are the result of complex processes. But these observations are not new by any means. Years ago Poincaré observed [1905, 147]:

7 citations

References
Journal Article•DOI•
TL;DR: The likelihood principle emphasized in Bayesian statistics implies that the rules governing when data collection stops are irrelevant to data interpretation, and it is entirely appropriate to collect data until a point has been proven or disproven.
Abstract: Bayesian statistics, a currently controversial viewpoint concerning statistical inference, is based on a definition of probability as a particular measure of the opinions of ideally consistent people. Statistical inference is modification of these opinions in the light of evidence, and Bayes’ theorem specifies how such modifications should be made. The tools of Bayesian statistics include the theory of specific distributions and the principle of stable estimation, which specifies when actual prior opinions may be satisfactorily approximated by a uniform distribution. A common feature of many classical significance tests is that a sharp null hypothesis is compared with a diffuse alternative hypothesis. Often evidence which, for a Bayesian statistician, strikingly supports the null hypothesis leads to rejection of that hypothesis by standard classical procedures. The likelihood principle emphasized in Bayesian statistics implies, among other things, that the rules governing when data collection stops are irrelevant to data interpretation. It is entirely appropriate to collect data until a point has been proven or disproven, or until the data collector runs out of time, money, or patience.
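The stopping-rule claim can be verified numerically: the same data (k heads in n coin flips) under a "flip n times" rule gives a binomial likelihood, while under a "flip until the k-th head" rule it gives a negative-binomial likelihood; the combinatorial constants differ but do not involve θ, so every likelihood ratio is unchanged. A sketch with invented numbers:

```python
from math import comb

# binomial ("stop after n flips"):       C(n, k) θ^k (1-θ)^(n-k)
# negative binomial ("stop at k-th head"): C(n-1, k-1) θ^k (1-θ)^(n-k)
# The constants differ, but they cancel in any likelihood ratio.

def binom_lik(theta, n, k):
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def negbinom_lik(theta, n, k):
    return comb(n - 1, k - 1) * theta**k * (1 - theta)**(n - k)

n, k = 12, 9
for lik in (binom_lik, negbinom_lik):
    lr = lik(0.75, n, k) / lik(0.5, n, k)  # H1: θ=0.75 vs H2: θ=0.5
    print(lr)  # identical under both stopping rules
```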

1,387 citations

Journal Article•DOI•
Ross D. Shachter•
TL;DR: An algorithm is developed that can evaluate any well-formed influence diagram and determine the optimal policy for its decisions and can be performed using the decision maker's perspective on the problem.
Abstract: An influence diagram is a graphical structure for modeling uncertain variables and decisions and explicitly revealing probabilistic dependence and the flow of information. It is an intuitive framework in which to formulate problems as perceived by decision makers and to incorporate the knowledge of experts. At the same time, it is a precise description of information that can be stored and manipulated by a computer. We develop an algorithm that can evaluate any well-formed influence diagram and determine the optimal policy for its decisions. Since the diagram can be analyzed directly, there is no need to construct other representations such as a decision tree. As a result, the analysis can be performed using the decision maker's perspective on the problem. Questions of sensitivity and the value of information are natural and easily posed. Modifications to the model suggested by such analyses can be made directly to the problem formulation, and then evaluated directly.
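Shachter's algorithm evaluates arbitrary well-formed diagrams; as a minimal illustration of what "evaluating" means, a one-decision, one-chance-node diagram reduces to taking an expectation over the chance node and maximizing over the decision (this toy is not the paper's algorithm, and all probabilities and payoffs below are invented):

```python
# Toy influence diagram: chance node Weather and decision node
# Umbrella both feed a value node. Evaluation: for each decision,
# take the expectation over the chance node, then pick the
# decision with the highest expected value.

p_weather = {"rain": 0.3, "sun": 0.7}
value = {                              # value(decision, weather)
    ("umbrella", "rain"): 70, ("umbrella", "sun"): 80,
    ("none", "rain"): 0,      ("none", "sun"): 100,
}

def expected_value(decision):
    return sum(p * value[(decision, w)] for w, p in p_weather.items())

best = max(["umbrella", "none"], key=expected_value)
print(best, expected_value(best))  # "umbrella", expected value 77 (up to rounding)
```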

1,343 citations

Journal Article•DOI•
TL;DR: A man-computer system for probabilistic processing of fallible military information is discussed in some detail as an application of these ideas and as a setting and motivator for future research on human information processing and decision making.
Abstract: The development of a dynamic decision theory will be central to the impending rapid expansion of research on human decision processes. Of a taxonomy of six decision problems, five require a dynamic theory in which the decision maker is assumed to make a sequence of decisions, basing decision n + 1 on what he learned from decision n and its consequences. Research in progress on information seeking, intuitive statistics, sequential prediction, and Bayesian information processing is reviewed to illustrate the kind of work needed. The relevance of mathematical developments in dynamic programming and Bayesian statistics to dynamic decision theory is examined. A man-computer system for probabilistic processing of fallible military information is discussed in some detail as an application of these ideas and as a setting and motivator for future research on human information processing and decision making.

365 citations

Journal Article•DOI•
TL;DR: A Probabilistic Information Processing System uses men and machines in a novel way to perform diagnostic information processing that circumvents human conservatism in information processing and fragments the job of evaluating diagnostic information into small separable tasks.
Abstract: A Probabilistic Information Processing System (PIP) uses men and machines in a novel way to perform diagnostic information processing. Men estimate likelihood ratios for each datum and each pair of hypotheses under consideration or a sufficient subset of these pairs. A computer aggregates these estimates by means of Bayes' theorem of probability theory into a posterior distribution that reflects the impact of all available data on all hypotheses being considered. Such a system circumvents human conservatism in information processing, the inability of men to aggregate information in such a way as to modify their opinions as much as the available data justify. It also fragments the job of evaluating diagnostic information into small separable tasks. The posterior distributions that are a PIP's output may be used as a guide to human decision making or may be combined with a payoff matrix to make decisions by means of the principle of maximizing expected value. A large simulation-type experiment compared a PIP with three other information processing systems in a simulated strategic war setting of the 1970's. The difference between PIP and its competitors was that in PIP the information was aggregated by computer, while in the other three systems, the operators aggregated the information in their heads. PIP processed the information dramatically more efficiently than did any competitor. Data that would lead PIP to give 99:1 odds in favor of a hypothesis led the next best system to give 4?: 1 odds.
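PIP's aggregation step is just Bayes' theorem in odds form: posterior odds equal prior odds times the product of the likelihood ratios for conditionally independent data. A sketch with invented human estimates:

```python
from math import prod

# Each datum gets a human-estimated likelihood ratio
# LR_i = P(datum_i | H1) / P(datum_i | H2). Assuming the data are
# conditionally independent given each hypothesis, the machine
# multiplies: O(H1:H2 | data) = O(H1:H2) * LR_1 * ... * LR_n.

prior_odds = 1.0             # H1 and H2 start equally likely
lrs = [3.0, 0.5, 4.0, 2.2]   # invented human LR estimates

posterior_odds = prior_odds * prod(lrs)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, posterior_prob)  # ≈ 13.2 and ≈ 0.93
```

The division of labor is exactly as the abstract describes: humans supply the separable per-datum ratios, the computer does the multiplication that unaided judges do too conservatively.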

144 citations

Journal Article•DOI•
TL;DR: In this article, the adjusted likelihood ratio (A) is defined, which incorporates both source reliability and the inferential impact (L) of the event being reported; it is shown that when source reliability is contingent upon some hypothesis, A can be greater than L for the event.
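One way to compute an adjusted ratio of this kind is the cascaded-inference identity P(report | H) = P(report | event)·P(event | H) + P(report | no event)·P(no event | H), which holds when the report depends on H only through the event. A sketch with invented numbers (here reliability is *not* contingent on the hypothesis, so A is pulled toward 1 relative to L):

```python
# L: likelihood ratio the event would carry if observed directly.
# A: adjusted likelihood ratio of a fallible *report* of the event.

def report_lik(p_event_given_h, hit_rate, false_alarm_rate):
    # P(report | H) = P(report | event)  * P(event | H)
    #               + P(report | ~event) * P(~event | H)
    return hit_rate * p_event_given_h + false_alarm_rate * (1 - p_event_given_h)

p_e_h1, p_e_h2 = 0.9, 0.3   # P(event | H1), P(event | H2)
L = p_e_h1 / p_e_h2         # LR of the event itself: 3.0

hit, fa = 0.8, 0.1          # P(report | event), P(report | ~event)
A = report_lik(p_e_h1, hit, fa) / report_lik(p_e_h2, hit, fa)
print(L, A)                 # A < L: the fallible witness dilutes the evidence
```

When the hit and false-alarm rates themselves depend on which hypothesis is true, the two `report_lik` calls would use different rates, and A can then exceed L, as the TL;DR notes.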

66 citations

Trending Questions (1)
What is likelihood?

The paper defines likelihood as the probability of evidence given a hypothesis multiplied by a positive constant. The constant cancels out in simple versions of Bayes's Theorem and is irrelevant to single-stage inferences.