
Jury size and composition - a predictive approach

01 Jan 2007-
TL;DR: The authors consider two basic aspects of juries that must decide on guilt verdicts: their size, and their composition when society consists of sub-populations. Working with a lower probability of a guilty verdict naturally provides a 'benefit of the doubt to the defendant' robustness of the inference.
Abstract: We consider two basic aspects of juries that must decide on guilt verdicts, namely the size of juries and their composition in situations where society consists of sub-populations. We refer to the actual jury that needs to provide a verdict as the ‘first jury’, and as their judgement should reflect that of society, we consider an imaginary ‘second jury’ to represent society. The focus is mostly on a lower probability of a guilty verdict by the second jury, conditional on a guilty verdict by the first jury, under suitable exchangeability assumptions between this second jury and the first jury. Using a lower probability of a guilty verdict naturally provides a ‘benefit of doubt to the defendant’ robustness of the inference. By use of a predictive approach, no assumptions on the guilt of a defendant are required, which distinguishes this approach from those presented before. The statistical inferences used in this paper are relatively straightforward, as only cases are considered where the lower probabilities according to Coolen’s Nonparametric Predictive Inference for Bernoulli random quantities [5] and Walley’s Imprecise Beta Model [24, 25] coincide.
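To make the two imprecise-probability models named in the abstract concrete, here is a minimal sketch (our illustration, not taken from the paper). For n Bernoulli observations containing s successes, Coolen's Nonparametric Predictive Inference gives lower and upper probabilities s/(n+1) and (s+1)/(n+1) for the next observation being a success, while Walley's Imprecise Beta Model with prior-strength parameter s0 gives s/(n+s0) and (s+s0)/(n+s0); for a single future observation the two coincide when s0 = 1. The function names and the jury numbers below are assumptions for illustration only.

```python
def npi_bounds(s, n):
    """NPI lower/upper probability that the next Bernoulli trial is a success,
    after observing s successes in n trials (Coolen, 1998)."""
    return s / (n + 1), (s + 1) / (n + 1)

def ibm_bounds(s, n, s0=1.0):
    """Walley's Imprecise Beta Model lower/upper probability for the next trial;
    s0 is the prior-strength (learning) parameter."""
    return s / (n + s0), (s + s0) / (n + s0)

# Illustrative only: a first jury of 12, all voting guilty.
# Lower/upper probability that a further exchangeable juror would also vote guilty:
lo, up = npi_bounds(s=12, n=12)
print(lo, up)                     # 0.923..., 1.0
print(ibm_bounds(12, 12, s0=1))   # coincides with NPI for one future trial
```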

Citations
01 Jan 2001
TL;DR: The probability of any event is the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening.
Abstract: Problem. Given the number of times in which an unknown event has happened and failed: required the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named.

SECTION 1. Definitions.
1. Several events are inconsistent when, if one of them happens, none of the rest can.
2. Two events are contrary when one or other of them must happen, and both together cannot.
3. An event is said to fail when it cannot happen; or, which comes to the same thing, when its contrary has happened.
4. An event is said to be determined when it has either happened or failed.
5. The probability of any event is the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening.
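In modern notation, the problem Bayes poses is the posterior probability that the success chance p lies in a named interval after s successes and f failures; under his uniform prior this is the familiar Beta-integral ratio (our transcription, not part of the original abstract):

```latex
P\left(a < p < b \;\middle|\; s \text{ successes},\, f \text{ failures}\right)
  = \frac{\int_a^b p^{s}(1-p)^{f}\,dp}{\int_0^1 p^{s}(1-p)^{f}\,dp}
```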

368 citations

Journal ArticleDOI
01 Dec 1984-Metrika
TL;DR: This book treats the theory of Bayesian inference at a rather sophisticated mathematical level; it is based on lectures given to students who have already had a course in measure-theoretic probability and has the rather clipped style of notes.
Abstract: This is a book about the theory of Bayesian inference at a rather sophisticated mathematical level. It is based on lectures given to students who already have had a course in measure-theoretic probability, and has the rather clipped style of notes. This led me to some difficulties of comprehension, especially when typographical errors occur, as in the definition of a random variable. Against this there is no unnecessary material, and space for a few human touches.

The development takes as fundamental the notion of expectation, though that word is scarcely used: it does not appear in the inadequate index but has a brief mention on page 17. The book begins therefore with linear, non-negative, continuous operators, and the treatment has the novelty that it does not require that the total probability be one: indeed, infinity is admitted, this having the advantage that improper distributions of the Jeffreys type can be included. There is an original and interesting account of marginal and conditional distributions with impropriety. For example, in discussing a uniform distribution over pairs (i, j) of integers, the sets j = 1 and j = 2 both have infinite probability and cannot therefore be compared, so that the conditional probabilities p(i = 1 | j = 1) and p(i = 1 | j = 2) require separate discussion. My own view is that this feature is not needed, for although improper distributions have some interest in low dimensions (and mainly in achieving an unnecessary match between Bayesian and Fisherian ideas) they fail in high dimensions, as Hartigan shows in chapter 9, where there is an admirable account of many normal means. A lesser objection is the complexity introduced by admitting impropriety: Bayes' theorem takes 14 lines to state and 20 to prove.

Chapter 5 is interestingly called "Making Probabilities" and discusses Jaynes' maximum entropy principle, Jeffreys' invariance, and similarity as ways of constructing distributions; those produced by the first two methods are typically improper. This attitude is continued into chapter 8, where exponential families are introduced as those minimizing information subject to constraints. There is a discussion of decision theory, as distinct from inference, but there is no attempt to consider utility: all is with respect to an undefined loss function. The consideration of the different types of admissibility is very brief, and the opportunity to discuss the mathematically sensitive but practically meaningful aspects of this topic is lost. Other chapters are concerned with convergence, unbiasedness and confidence, multinomials, asymptotic normality, robustness and non-parametric procedures; the last being mainly devoted to a good account of the Dirichlet process.

Before all this mathematics, the book begins with a brief account of the various theories of probability: logical, empirical and subjective. At the end of the account is a fascinating discussion of why the author thinks "there is a probability 0.05 that there will be a large-scale nuclear war between the U.S. and the U.S.S.R. before 2000". This connection between mathematics and reality is most warmly to be welcomed.

The merit of this book lies in the novelty of the perspective presented. It is like looking at a courtyard from some unfamiliar window in an upper turret. Things look different from up there. Some corners of the courtyard are completely obscured. (It is surprising that there is no mention at all of the likelihood principle, and only an aside reference to likelihood.) Other matters are better appreciated because of the unfamiliar aspect: normal means, for example. The book does not therefore present a balanced view of Bayesian theory, but does provide an interesting and valuable account of many aspects of it and should command the attention of any statistical theorist.

85 citations

Journal ArticleDOI
TL;DR: A review of Statistical Science in the Courtroom, edited by Joseph L. Gastwirth, which surveys the state of the art in the use of statistical science in legal proceedings.
Abstract: STATISTICAL SCIENCE IN THE COURTROOM. Edited by Joseph L. Gastwirth. Springer, New York, 2000. Statistics for Social Science and Public Policy series, hardcover, 443 pp., US$59.95; ISBN 0-387-98997-8.

13 citations

References
01 Jan 1996

50 citations


"Jury size and composition - a predi..." refers background in this paper

  • ...Imprecise Beta Model, lower probability, Nonparametric Predictive Inference, representation of sub-populations....

  • ...However, one could argue that, ideally, a substantial majority of the population should (be expected to) agree with the guilty verdict, so perhaps the values 0.891 (for y = 10) or 0.953 (y = 9) are more natural to focus on....

Journal ArticleDOI
TL;DR: A model updating one suggested by Poisson is developed and examined using data from Kalven and Zeisel [2] on the American legal system; the model is then used to compare the American experience with French data from the early nineteenth century.
Abstract: Jury size, majorities required for acquittal or conviction and correctness of juror decisions in criminal trials are studied. The article draws heavily on ideas presented in a previous article [1] which updated a model suggested by Poisson. Briefly, a model is developed and then examined, employing data from Kalven and Zeisel [2] on the American legal system. Comparisons of the American experience with French data from the early 19th century are made through the use of the model. Recent U. S. Supreme Court decisions in criminal trials regarding jury size and the relaxation of unanimity for decisions in criminal trials make these studies pertinent.
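Poisson-style models of this kind treat each juror as voting a given way with some probability, so that verdict probabilities under different jury sizes and majority rules reduce to binomial tail sums. A minimal sketch of that calculation (our illustration; the per-juror probability and the two rules compared are made-up inputs, not estimates from the article):

```python
from math import comb

def verdict_probability(n, m, theta):
    """Probability that at least m of n jurors vote for conviction,
    when each juror votes that way independently with probability theta.
    This binomial tail sum underlies Poisson-style jury models."""
    return sum(comb(n, k) * theta**k * (1 - theta)**(n - k)
               for k in range(m, n + 1))

# Illustrative inputs only: compare a unanimous 12-person jury
# with a 10-of-12 majority rule, assuming theta = 0.9 per juror.
print(verdict_probability(12, 12, 0.9))  # ~0.282
print(verdict_probability(12, 10, 0.9))  # ~0.889
```

Relaxing unanimity to a 10-of-12 rule triples the conviction probability in this toy setting, which is the kind of sensitivity to jury size and majority requirements the article studies.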

39 citations


"Jury size and composition - a predi..." refers background in this paper

  • ...Imprecise Beta Model, lower probability, Nonparametric Predictive Inference, representation of sub-populations....

Journal ArticleDOI
TL;DR: Lower and upper predictive probabilities from Coolen [1998] are used to compare future numbers of successes in Bernoulli trials across different groups, considering both pairwise and multiple comparisons.
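For a flavour of such comparisons, one deliberately simple device (a sketch under our own assumptions, not the article's full method) bounds the probability that group A's next trial succeeds while group B's fails, by combining A's NPI lower bound with the complement of B's NPI upper bound under independence. The function names and data below are hypothetical.

```python
def npi_next_bounds(s, n):
    """NPI lower/upper probability that the next Bernoulli trial succeeds,
    after s successes in n observations (Coolen, 1998)."""
    return s / (n + 1), (s + 1) / (n + 1)

def lower_prob_a_beats_b(sa, na, sb, nb):
    """Conservative lower probability that group A's next trial succeeds
    while group B's fails: A's lower bound times the complement of B's
    upper bound, treating the two future trials as independent."""
    lo_a, _ = npi_next_bounds(sa, na)
    _, up_b = npi_next_bounds(sb, nb)
    return lo_a * (1 - up_b)

# Hypothetical data: group A with 8/10 successes, group B with 3/10.
print(lower_prob_a_beats_b(8, 10, 3, 10))  # (8/11) * (1 - 4/11) = 0.462...
```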

39 citations

01 Sep 2004
TL;DR: A review of a Six Sigma book subtitled A Practical Guide to Understanding, Assessing, and Implementing the Strategy that Yields Bottom-Line Success, which provides a management overview of the Six Sigma process and presents 21-step tool-application processes across several pages of tables.
Abstract: This book arrived when I had just begun participation on the team that was developing the business case for Six Sigma at BP Chemicals. Having a transatlantic flight that needed an activity, I read through this book. Previously the first author had written the first of the new wave of Six Sigma books (Breyfogle 1999). In the Technometrics report, Gardner (2000) noted that, despite the title, the book had minimal content that would support persons who were interested in learning how to assess the value of the Six Sigma process. The book concentrated on its subtitle, Smart Solutions Using Statistical Methods. It was primarily a book about statistical tools. Here the author is back with two coauthors and a book that fits the title of the earlier book. This new and much smaller book, subtitled A Practical Guide to Understanding, Assessing, and Implementing the Strategy that Yields Bottom-Line Success, fulfilled very nicely my need for a management overview of the Six Sigma process.

The book has four parts and 14 mostly short chapters. The longest is the first chapter, which defines Six Sigma and then compares Six Sigma to much of the quality process spectrum that has preceded it. Particularly useful were the answers to a series of questions that people frequently ask about Six Sigma. The second chapter gives a lot of history, including experiences of a number of companies that have implemented Six Sigma and an explanation of its statistical definition. The last chapter in this section tries to sell the need for Six Sigma based mostly on quality costs. This chapter did not do a very good job representing Six Sigma as a new and better process.

There follow three chapters on Six Sigma metrics. The first chapter here, called "Numbers and Information," has an interesting approach to the use of control charting for infrequent data to see the big picture. A following chapter, "Crafting Insightful Metrics," devotes more space to the presentation on infrequent control charting. Licenses taken here with control charting will seem ill conceived to almost anyone with a background in control charts. For example, there are individuals charts for positive values that have negative lower control limits, or individuals charts for data in which less than 20% are nonzero. Last in this section is a short chapter on performance metrics.

The next group of four chapters falls under the heading "Six Sigma Business Strategy." By this point the authors are totally preaching to the choir. There is essentially only one way for an organization to carry out the Six Sigma process, which is their way; essentially, this is for the company to act like GE. That particular prescription is a huge pill for most companies to swallow. The chapter on deployment options makes an effective case for deployment through projects, not through training, and it argues that consultants are essential. The chapter on creating a successful Six Sigma infrastructure argues for the GE way. Similar content is found in the next chapter on training, which argues for external consultant trainers; it never mentions sending people to outside training. In this chapter Minitab is declared the winner in the Six Sigma statistical software derby. The last chapter in the set, "Project Selection, Sizing and Other Techniques," is one of the best chapters, though the specter of doing things because of Six Sigma, rather than pursuing logical needs for the business, lurks behind these processes.

Despite intentions otherwise, the authors have difficulty separating their material from the manufacturing-for-customers environment. The last part of the book, "Applying Six Sigma," presents three chapters with applications having a specific focus: manufacturing applications, service/transactional applications, and development applications. The latter have also assumed the label "Design for Six Sigma" in the provider marketplace. For each type of example, a 21-step process for using Six Sigma tools is given across several pages of tables. There is a final chapter on creativity and innovation that is a nice enhancement to the overall methodology. The book has a nice glossary and an excellent reference list.

This book is very much an extension of Breyfogle (1999). There is considerable repetition and a lot of additional references. Its strength is its description and illustration of the many things that are important and necessary in the Six Sigma process. Its weakness is its adherence to the GE model and its impression that one should not bother if there is no Jack Welch mandate for the business to pursue the process. For more on the GE experiences, see the excellent new book by Pande, Neuman, and Cavanagh (2000), or find the very popular book by Harry and Schroeder (2000) that is available in any bookstore. Comparatively, the Pande et al. book is a more straightforward presentation of Six Sigma implementation details. Its perspective is clear, and it is devoid of the hype of the other reference. It will help any business get Six Sigma off the ground in a way that is appropriate. For me Pande et al. (2000) is the best of all of the Six Sigma books because it promotes starting with a pilot effort, the direction that has been chosen for BP Chemicals.

33 citations

Journal ArticleDOI
TL;DR: This article examines in more detail the internal and external behavior of a jury and explores questions regarding the correctness of jury decisions and the effect of changes in jury size and in majority requirements for decisions.
Abstract: Extending earlier work by the authors [3, 4], this article examines in more detail the internal and external behavior of a jury. The internal analysis sheds light on the decision-making process from the first ballot position to the final outcome. Data in Kalven and Zeisel [6] have been employed to test the tenability of the models. The external analysis explores questions regarding the correctness of jury decisions and the effect of changes in jury size and in majority requirements for decisions.

32 citations