Journal ArticleDOI

Inferences from Multinomial Data: Learning About a Bag of Marbles

TLDR
In this article, the imprecise Dirichlet model is proposed for making inferences from multinomial data in cases where there is no prior information, with inferences expressed in terms of posterior upper and lower probabilities.
Abstract
A new method is proposed for making inferences from multinomial data in cases where there is no prior information. A paradigm is the problem of predicting the colour of the next marble to be drawn from a bag whose contents are (initially) completely unknown. In such problems we may be unable to formulate a sample space because we do not know what outcomes are possible. This suggests an invariance principle: inferences based on observations should not depend on the sample space in which the observations and future events of interest are represented. Objective Bayesian methods do not satisfy this principle. This paper describes a statistical model, called the imprecise Dirichlet model, for drawing coherent inferences from multinomial data. Inferences are expressed in terms of posterior upper and lower probabilities. The probabilities are initially vacuous, reflecting prior ignorance, but they become more precise as the number of observations increases. This model does satisfy the invariance principle. Two sets of data are analysed in detail. In the first example one red marble is observed in six drawings from a bag. Inferences from the imprecise Dirichlet model are compared with objective Bayesian and frequentist inferences. The second example is an analysis of data from medical trials which compared two treatments for cardiorespiratory failure in newborn babies. There are two problems: to draw conclusions about which treatment is more effective and to decide when the randomized trials should be terminated. This example shows how the imprecise Dirichlet model can be used to analyse data in the form of a contingency table.
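The model's posterior predictive bounds have a simple closed form. The following is a minimal sketch in Python, assuming the standard imprecise-Dirichlet-model bounds n_j/(N+s) and (n_j+s)/(N+s) for category j after N observations with hyperparameter s > 0; the function name and the grouping of the non-red draws into a single category are illustrative.

```python
# Minimal sketch of the imprecise Dirichlet model's posterior predictive
# bounds. After N observations with n_j of them in category j, and with
# hyperparameter s > 0, the probability that the next observation falls
# in category j satisfies
#     n_j / (N + s)  <=  P(j)  <=  (n_j + s) / (N + s).
# With N = 0 the bounds are vacuous ([0, 1]), reflecting prior ignorance.

def idm_bounds(counts, s=1.0):
    """Return {category: (lower, upper)} posterior predictive bounds."""
    N = sum(counts.values())
    return {cat: (n / (N + s), (n + s) / (N + s)) for cat, n in counts.items()}

# The paper's first example: one red marble observed in six draws.
print(idm_bounds({"red": 1, "not red": 5}, s=1.0))  # red: (1/7, 2/7)
print(idm_bounds({"red": 1, "not red": 5}, s=2.0))  # red: (1/8, 3/8)
```

Note that the bounds for "red" depend only on the red count and N, not on how the remaining outcomes are partitioned into categories; this is the sample-space invariance the abstract describes.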


Citations
Book ChapterDOI

Evidential Object Recognition Based on Information Gain Maximization

TL;DR: This paper uses belief functions to make the reliability of the evidence provided by the training data an explicit part of the recognition model, and investigates the effect of the amount of training data on classification performance by comparing different methods for constructing belief functions from data.
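The paper's specific constructions are not reproduced here; as a hedged sketch of the general idea, under assumed function names and an assumed s/(N+s) discounting rule, one simple way to make limited training data an explicit part of the model is to reserve mass for the whole frame of discernment:

```python
# Illustrative sketch only (not the paper's specific construction): build a
# mass function over classes from training counts, reserving mass
# s / (N + s) for the whole frame of discernment to represent the
# unreliability of scarce training data.

def mass_from_counts(counts, s=1.0):
    N = sum(counts.values())
    frame = frozenset(counts)
    m = {frozenset([c]): n / (N + s) for c, n in counts.items()}
    m[frame] = m.get(frame, 0.0) + s / (N + s)  # residual mass = ignorance
    return m

def pignistic(m):
    """Smets' pignistic transform: spread each mass evenly over its focal set."""
    bet = {c: 0.0 for focal in m for c in focal}
    for focal, mass in m.items():
        for c in focal:
            bet[c] += mass / len(focal)
    return bet

m = mass_from_counts({"cup": 8, "plate": 2}, s=1.0)
print(pignistic(m))  # more training data -> less mass left on the frame
```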
Journal ArticleDOI

Comparison of estimators for measures of linkage disequilibrium.

TL;DR: It is confirmed that volume estimators have better expected mean squared error than the naive plug-in estimators, but are in turn outperformed by estimators that plug easy-to-calculate non-informative Bayesian probability estimates into the theoretical formulae for the measures.
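As a hedged sketch of the comparison (the measure D = p_AB - p_A * p_B is standard; treating the Bayesian variant as a symmetric Dirichlet pseudocount of c = 0.5 is an assumption, not necessarily the paper's exact prior):

```python
# Hedged sketch: two plug-in estimators of the linkage-disequilibrium
# measure D = p_AB - p_A * p_B from haplotype counts (AB, Ab, aB, ab).
# c = 0 gives the naive plug-in; c > 0 plugs in posterior-mean frequencies
# under a symmetric Dirichlet(c) prior (c = 0.5 is a Jeffreys-style choice).

def d_estimate(haplotype_counts, c=0.0):
    counts = [n + c for n in haplotype_counts]  # order: AB, Ab, aB, ab
    total = sum(counts)
    p_AB, p_Ab, p_aB, _ = (x / total for x in counts)
    p_A = p_AB + p_Ab  # marginal frequency of allele A
    p_B = p_AB + p_aB  # marginal frequency of allele B
    return p_AB - p_A * p_B

print(d_estimate([10, 2, 3, 5]))         # naive plug-in estimate of D
print(d_estimate([10, 2, 3, 5], c=0.5))  # non-informative Bayesian plug-in
```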
Journal ArticleDOI

Minimum distance estimation in imprecise probability models

TL;DR: In this paper, the authors consider estimating a parameter θ in an imprecise probability model (P̄_θ)_{θ ∈ Θ} consisting of coherent upper previsions P̄_θ, each given by a finite number of constraints on expectations.
Posted Content

A Gibbs sampler for a class of random convex polytopes

TL;DR: This article presents a Gibbs sampler for the Dempster-Shafer (DS) approach to statistical inference for categorical distributions; the sampler relies on an equivalence between the iterative constraints of the vertex configuration and the non-negativity of cycles in a fully connected directed graph.
Journal ArticleDOI

Unifying parameter learning and modelling complex systems with epistemic uncertainty using probability interval

TL;DR: This paper proposes combining dynamic Bayesian networks with imprecise probabilities to solve the challenging task of modelling complex dynamical systems from heterogeneous pieces of knowledge that vary in precision and reliability.
References
Journal ArticleDOI

Bootstrap Methods: Another Look at the Jackknife

TL;DR: In this article, the authors discuss the problem of estimating the sampling distribution of a pre-specified random variable R(X, F) on the basis of the observed data x.
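As a minimal sketch of the idea (the helper name and resample count are illustrative): resample the observed data with replacement, recompute the statistic on each resample, and use the replicates to approximate the sampling distribution of R(X, F).

```python
# Minimal sketch of the nonparametric bootstrap: approximate the sampling
# distribution of a statistic R(X, F) by resampling the observed data x
# with replacement and recomputing the statistic on each resample.
import random
import statistics

def bootstrap(data, statistic, n_resamples=2000, seed=0):
    rng = random.Random(seed)
    return [statistic(rng.choices(data, k=len(data)))
            for _ in range(n_resamples)]

x = [2.1, 3.4, 1.9, 5.0, 4.2, 3.3, 2.8]
reps = bootstrap(x, statistics.mean)
print(statistics.stdev(reps))  # bootstrap standard error of the sample mean
```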
Book

Theory of probability

TL;DR: This book introduces direct probabilities, approximate methods and simplifications, significance tests for one new parameter and for various complications, and frequency definitions and direct methods.
Journal ArticleDOI

A Bayesian Analysis of Some Nonparametric Problems

TL;DR: In this article, a class of prior distributions, called Dirichlet process priors, is proposed, under which treatment of many nonparametric statistical problems may be carried out, yielding results that are comparable to the classical theory.
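As a hedged illustration, one can draw an approximate realisation from a Dirichlet process prior via Sethuraman's stick-breaking representation (a later construction, not the one in Ferguson's original paper; names and the truncation level are illustrative):

```python
# Hedged sketch: draw an approximate (truncated) sample from a Dirichlet
# process prior DP(alpha, G0) via stick-breaking:
#   v_k ~ Beta(1, alpha),  w_k = v_k * prod_{j<k} (1 - v_j),  atoms ~ G0.
import random

def stick_breaking_dp(alpha, base_sampler, truncation=100, seed=0):
    rng = random.Random(seed)
    weights, atoms, remaining = [], [], 1.0
    for _ in range(truncation):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        atoms.append(base_sampler(rng))
        remaining *= 1.0 - v
    return weights, atoms  # discrete random measure: sum_k w_k * delta(atom_k)

# Example: base measure G0 = standard normal, concentration alpha = 2.
w, a = stick_breaking_dp(2.0, lambda rng: rng.gauss(0.0, 1.0))
print(sum(w))  # close to 1 for a large enough truncation level
```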
Book

Bayesian inference in statistical analysis

TL;DR: This book investigates the effect of non-normality on inferences about a population mean, with generalizations, as well as inferences about means when information comes from more than one source.