Jury size and composition - a predictive approach
01 Jan 2007
TL;DR: In this article, the authors consider two basic aspects of juries that must decide on guilt verdicts, namely the size of juries and their composition in situations where society consists of sub-populations; using a lower probability of a guilty verdict naturally provides a "benefit of doubt to the defendant" robustness of the inference.
Abstract: We consider two basic aspects of juries that must decide on guilt verdicts, namely the size of juries and their composition in situations where society consists of sub-populations. We refer to the actual jury that needs to provide a verdict as the ‘first jury’, and as their judgement should reflect that of society, we consider an imaginary ‘second jury’ to represent society. The focus is mostly on a lower probability of a guilty verdict by the second jury, conditional on a guilty verdict by the first jury, under suitable exchangeability assumptions between this second jury and the first jury. Using a lower probability of a guilty verdict naturally provides a ‘benefit of doubt to the defendant’ robustness of the inference. By use of a predictive approach, no assumptions on the guilt of a defendant are required, which distinguishes this approach from those presented before. The statistical inferences used in this paper are relatively straightforward, as only cases are considered where the lower probabilities according to Coolen’s Nonparametric Predictive Inference for Bernoulli random quantities [5] and Walley’s Imprecise Beta Model [24, 25] coincide.
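Although the paper's full jury model is not reproduced on this page, the coinciding lower and upper probabilities it relies on are easy to state: after observing k successes in n exchangeable Bernoulli trials, both Coolen's NPI and Walley's Imprecise Beta Model with learning parameter s = 1 give lower probability k/(n+1) and upper probability (k+1)/(n+1) that the next observation is a success. A minimal Python sketch under that reading (the jury framing and all names below are illustrative, not the paper's notation):

def bernoulli_imprecise_probability(k: int, n: int) -> tuple[float, float]:
    """Lower and upper probability that the next exchangeable Bernoulli
    observation is a success, given k successes in n observations.
    These are the values at which Coolen's NPI for Bernoulli quantities
    and Walley's Imprecise Beta Model with s = 1 coincide:
        lower = k / (n + 1),  upper = (k + 1) / (n + 1).
    """
    if not 0 <= k <= n:
        raise ValueError("need 0 <= k <= n")
    return k / (n + 1), (k + 1) / (n + 1)

# Illustrative use only: if 10 of 12 first-jury members vote guilty, the
# imprecise probability that one further exchangeable juror would also
# vote guilty is the interval below.
lower, upper = bernoulli_imprecise_probability(k=10, n=12)
print(f"[{lower:.3f}, {upper:.3f}]")  # [0.769, 0.846]

Reporting the lower endpoint of such an interval is what gives the "benefit of doubt to the defendant" robustness described in the abstract.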
Citations
[...]
01 Jan 2001
TL;DR: The probability of any event is the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening.
Abstract: Problem. Given the number of times in which an unknown event has happened and failed: Required the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named.
SECTION 1. Definitions:
1. Several events are inconsistent, when if one of them happens, none of the rest can.
2. Two events are contrary when one, or other of them must; and both together cannot happen.
3. An event is said to fail, when it cannot happen; or, which comes to the same thing, when its contrary has happened.
4. An event is said to be determined when it has either happened or failed.
5. The probability of any event is the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening.
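In modern notation, Definition 5 is the ratio definition of probability, and the Problem is solved by what is now called the Beta posterior under a uniform prior. A sketch (the symbols a, b, k, n are ours, not Bayes's):

% Definition 5 as a formula: probability is the fair price of an
% expectation divided by the value of the prize it pays on the event.
P(E) = \frac{\text{value at which the expectation ought to be computed}}{\text{value of the thing expected upon } E \text{ happening}}

% Bayes's Problem: after k happenings in n trials, the chance that the
% unknown probability p lies between two named degrees a < b is, under
% a uniform prior on p,
\Pr(a \le p \le b \mid k, n) = \frac{\int_a^b p^{k}(1-p)^{n-k}\,dp}{\int_0^1 p^{k}(1-p)^{n-k}\,dp}

For example, an expectation fairly priced at 2 units that pays 10 units if E happens corresponds to P(E) = 2/10 = 0.2.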
217 citations
[...]
TL;DR: The theory of Bayesian inference at a rather sophisticated mathematical level is discussed in this paper, which is based on lectures given to students who already have had a course in measure-theoretic probability and has the rather clipped style of notes.
Abstract: This is a book about the theory of Bayesian inference at a rather sophisticated mathematical level. It is based on lectures given to students who already have had a course in measure-theoretic probability, and has the rather clipped style of notes. This led me to some difficulties of comprehension, especially when typographical errors occur, as in the definition of a random variable. Against this there is no unnecessary material and space for a few human touches. The development takes as fundamental the notion of expectation, though that word is scarcely used (it does not appear in the inadequate index but has a brief mention on page 17). The book begins therefore with linear, non-negative, continuous operators, and the treatment has the novelty that it does not require that the total probability be one: indeed, infinity is admitted, this having the advantage that improper distributions of the Jeffreys type can be included. There is an original and interesting account of marginal and conditional distributions with impropriety. For example, in discussing a uniform distribution over pairs (i, j) of integers, the sets j = 1 and j = 2 both have infinite probability and cannot therefore be compared; so the conditional probabilities p(i = 1 | j = 1) and p(i = 1 | j = 2) require separate discussion. My own view is that this feature is not needed, for although improper distributions have some interest in low dimensions (and mainly in achieving an unnecessary match between Bayesian and Fisherian ideas) they fail in high dimensions, as Hartigan shows in chapter 9, where there is an admirable account of many normal means. A lesser objection is the complexity introduced by admitting impropriety: Bayes' theorem takes 14 lines to state and 20 to prove.
Chapter 5 is interestingly called "Making Probabilities" and discusses Jaynes' maximum entropy principle, Jeffreys' invariance, and similarity as ways of constructing distributions; those produced by the first two methods are typically improper. This attitude is continued into chapter 8, where exponential families are introduced as those minimizing information subject to constraints. There is a discussion of decision theory, as distinct from inference, but there is no attempt to consider utility: all is with respect to an undefined loss function. The consideration of the different types of admissibility is very brief, and the opportunity to discuss the mathematically sensitive but practically meaningful aspects of this topic is lost. Other chapters are concerned with convergence, unbiasedness and confidence, multinomials, asymptotic normality, robustness and non-parametric procedures; the last is mainly devoted to a good account of the Dirichlet process.
Before all this mathematics, the book begins with a brief account of the various theories of probability: logical, empirical and subjective. At the end of the account is a fascinating discussion of why the author thinks "there is a probability 0.05 that there will be a large scale nuclear war between the U.S. and the U.S.S.R. before 2000". This connection between mathematics and reality is most warmly to be welcomed. The merit of this book lies in the novelty of the perspective presented. It is like looking at a courtyard from some unfamiliar window in an upper turret. Things look different from up there. Some corners of the courtyard are completely obscured. (It is surprising that there is no mention at all of the likelihood principle, and only an aside reference to likelihood.) Other matters are better appreciated because of the unfamiliar aspect (normal means, for example). The book does not therefore present a balanced view of Bayesian theory, but it does provide an interesting and valuable account of many aspects of it and should command the attention of any statistical theorist.
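The reviewer's example of conditioning under impropriety can be made concrete. A minimal sketch in modern notation, assuming the "uniform distribution over pairs" assigns the same mass m > 0 to every pair (i, j) of integers (our notation, not Hartigan's):

% Every row of the integer lattice carries infinite total mass:
P(j = 1) = \sum_{i} m = \infty, \qquad P(j = 2) = \sum_{i} m = \infty,
% so P(j = 1) and P(j = 2) cannot be compared, and conditional
% probabilities such as
P(i = 1 \mid j = 1) \quad\text{and}\quad P(i = 1 \mid j = 2)
% cannot be formed as ratios of these infinite unconditional masses;
% they need the separate, row-by-row treatment the review refers to.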
79 citations
[...]
TL;DR: This entry reviews Statistical Science in the Courtroom (Joseph L. Gastwirth, ed.), a survey of the state of the art in the use of statistical science in the courtroom.
Abstract: STATISTICAL SCIENCE IN THE COURTROOM. Edited by Joseph L. Gastwirth. Springer, New York, 2000. Statistics for Social Science and Public Policy series, hardcover, 443 pp; US$59.95; ISBN 0-387-98997-8.
13 citations
References
Book
[...]
01 Jan 1939
TL;DR: In this book, the author develops the theory of probability, covering fundamental notions, direct probabilities, estimation problems, approximate methods and simplifications, significance tests (one new parameter, and various complications), and frequency definitions and direct methods.
Abstract:
1. Fundamental notions
2. Direct probabilities
3. Estimation problems
4. Approximate methods and simplifications
5. Significance tests: one new parameter
6. Significance tests: various complications
7. Frequency definitions and direct methods
8. General questions
7,074 citations
[...]
TL;DR: In this article, the imprecise Dirichlet model is proposed for drawing inferences from multinomial data in cases where there is no prior information; inferences are expressed in terms of posterior upper and lower probabilities.
Abstract: A new method is proposed for making inferences from multinomial data in cases where there is no prior information. A paradigm is the problem of predicting the colour of the next marble to be drawn from a bag whose contents are (initially) completely unknown. In such problems we may be unable to formulate a sample space because we do not know what outcomes are possible. This suggests an invariance principle: inferences based on observations should not depend on the sample space in which the observations and future events of interest are represented. Objective Bayesian methods do not satisfy this principle. This paper describes a statistical model, called the imprecise Dirichlet model, for drawing coherent inferences from multinomial data. Inferences are expressed in terms of posterior upper and lower probabilities. The probabilities are initially vacuous, reflecting prior ignorance, but they become more precise as the number of observations increases. This model does satisfy the invariance principle. Two sets of data are analysed in detail. In the first example one red marble is observed in six drawings from a bag. Inferences from the imprecise Dirichlet model are compared with objective Bayesian and frequentist inferences. The second example is an analysis of data from medical trials which compared two treatments for cardiorespiratory failure in newborn babies. There are two problems: to draw conclusions about which treatment is more effective and to decide when the randomized trials should be terminated. This example shows how the imprecise Dirichlet model can be used to analyse data in the form of a contingency table.
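The marble example in the abstract is enough to state the model's basic formula: for a category observed n_j times out of N draws, the imprecise Dirichlet model with learning parameter s gives lower probability n_j/(N+s) and upper probability (n_j+s)/(N+s) for the next observation. A minimal Python sketch of that calculation (the function name is ours; Walley's paper discusses the choice of s):

def idm_probability(n_j: int, n_total: int, s: float = 1.0) -> tuple[float, float]:
    """Imprecise Dirichlet model: lower/upper probability that the next
    observation falls in a category seen n_j times out of n_total.
        lower = n_j / (n_total + s),  upper = (n_j + s) / (n_total + s)
    With no data (n_total = 0) the interval is vacuous: [0, 1].
    """
    return n_j / (n_total + s), (n_j + s) / (n_total + s)

# The abstract's first example: one red marble observed in six drawings.
print(idm_probability(1, 6, s=1))  # (0.1428..., 0.2857...)
print(idm_probability(1, 6, s=2))  # (0.125, 0.375), a more cautious s

The interval width s/(n_total + s) shrinks as observations accumulate, matching the abstract's point that the initially vacuous probabilities become more precise with data.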
482 citations