
Inferences from multinomial data: Learning about a bag of marbles - Discussion

About: The article was published on 1996-01-01 and is currently open access. It has received 50 citations to date. The article focuses on the topic: Multinomial distribution.
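
For orientation, the model at the heart of the article is simple to state: with observed counts n_j summing to N and a hyperparameter s (Walley suggests s = 1 or s = 2), the imprecise Dirichlet model (IDM) bounds the predictive probability that the next observation falls in category j by n_j/(N + s) from below and (n_j + s)/(N + s) from above. A minimal Python sketch; the category names and counts are illustrative only:

```python
# Minimal sketch of Walley's imprecise Dirichlet model (IDM).
# With counts n_j totalling N and hyperparameter s, the predictive
# probability that the next observation is of category j lies in
# [n_j / (N + s), (n_j + s) / (N + s)].

def idm_bounds(counts, s=2.0):
    """Return {category: (lower, upper)} predictive bounds under the IDM."""
    total = sum(counts.values())
    return {cat: (n / (total + s), (n + s) / (total + s))
            for cat, n in counts.items()}

# A bag of marbles from which 6 red and 4 blue draws have been observed:
print(idm_bounds({"red": 6, "blue": 4}))
# -> red: (0.5, 0.667), blue: (0.333, 0.5), approximately
```

Note how the width of each interval, s/(N + s), shrinks as data accumulate, which is how the IDM encodes prior ignorance without a precise prior.
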
Citations
Journal ArticleDOI
TL;DR: Bayesian methods for categorical data analysis are surveyed, with primary emphasis on contingency table analysis and, in more recent work, on generalized linear models such as logistic regression for binary and multi-category response variables.
Abstract: This article surveys Bayesian methods for categorical data analysis, with primary emphasis on contingency table analysis. Early innovations were proposed by Good (1953, 1956, 1965) for smoothing proportions in contingency tables and by Lindley (1964) for inference about odds ratios. These approaches primarily used conjugate beta and Dirichlet priors. Altham (1969, 1971) presented Bayesian analogs of small-sample frequentist tests for 2 x 2 tables using such priors. An alternative approach using normal priors for logits received considerable attention in the 1970s by Leonard and others (e.g., Leonard 1972). Adopted usually in a hierarchical form, the logit-normal approach allows greater flexibility and scope for generalization. The 1970s also saw considerable interest in loglinear modeling. The advent of modern computational methods since the mid-1980s has led to a growing literature on fully Bayesian analyses with models for categorical data, with main emphasis on generalized linear models such as logistic regression for binary and multi-category response variables.
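
The conjugate updating this abstract refers to is compact enough to show inline: a Dirichlet(alpha_1, ..., alpha_K) prior combined with multinomial counts n_1, ..., n_K yields a Dirichlet(alpha_1 + n_1, ..., alpha_K + n_K) posterior, whose mean smooths the raw proportions toward the prior, in the spirit of Good's smoothing. A small sketch with made-up numbers:

```python
# Sketch of the conjugate Dirichlet-multinomial update: prior
# Dirichlet(alpha) plus counts n gives posterior Dirichlet(alpha + n).
# The posterior mean smooths raw cell proportions toward the prior.

def dirichlet_posterior_mean(alpha, counts):
    """Posterior mean of the cell probabilities."""
    total = sum(a + n for a, n in zip(alpha, counts))
    return [(a + n) / total for a, n in zip(alpha, counts)]

# Uniform Dirichlet(1, 1, 1) prior and counts (3, 0, 7): the empty
# cell is pulled away from zero, which is the smoothing effect.
print(dirichlet_posterior_mean([1, 1, 1], [3, 0, 7]))
# -> [0.308, 0.077, 0.615], approximately
```
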

158 citations


Cites background from "Inferences from multinomial data: Learning about a bag of marbles"

  • ...See also Walley (1996) for discussion of a related “imprecise Dirichlet model” for multinomial data....


Journal ArticleDOI
TL;DR: The foundational issues addressed reflect on the position that "probability is perfect" and openly consider the need for an extended risk assessment framework that reflects the separation existing in practice between analyst and decision maker.
Abstract: In the analysis of the risk associated with rare events that may lead to catastrophic consequences under large uncertainty, it is questionable whether the knowledge and information available for the analysis can be properly reflected by probabilities. Approaches other than purely probabilistic ones have been suggested, for example, using interval probabilities, possibilistic measures, or qualitative methods. In this article, we look into the problem and identify a number of issues that are foundational for its treatment. The foundational issues addressed reflect on the position that "probability is perfect" and openly consider the need for an extended framework for risk assessment that reflects the separation that exists in practice between analyst and decision maker.
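
Of the alternatives this abstract lists, interval probabilities are the easiest to make concrete. A minimal sketch with made-up numbers, assuming the interval assessments are consistent and reachable: when each outcome's probability is only pinned to an interval, the probability of any event is itself an interval, found by optimising over all distributions compatible with the bounds.

```python
# Sketch: lower/upper probability of an event when each outcome's
# probability p[i] is only constrained to lie in [lower[i], upper[i]]
# with sum(p) == 1. Assumes the bounds are consistent ("reachable"),
# in which case the optima have the closed form used below.

def event_bounds(lower, upper, event):
    """Bounds on P(event) over all distributions within the intervals."""
    in_lo = sum(lower[i] for i in event)
    in_hi = sum(upper[i] for i in event)
    out_lo = sum(l for i, l in enumerate(lower) if i not in event)
    out_hi = sum(u for i, u in enumerate(upper) if i not in event)
    return max(in_lo, 1 - out_hi), min(in_hi, 1 - out_lo)

# Three outcomes with interval assessments; the event is outcome 0.
print(event_bounds([0.1, 0.3, 0.2], [0.3, 0.5, 0.4], {0}))
# -> (0.1, 0.3)
```
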

140 citations


Cites background from "Inferences from multinomial data: Learning about a bag of marbles"

  • ...many open questions related to the foundations of these approaches and their use in both inference and in risk and uncertainty decision-making; see, for example, the discussions in [44, 45, 46, 47, 48, 49, 50, 51]....


Journal ArticleDOI
TL;DR: An overview of recently developed theory and methods for nonparametric predictive inference (NPI), which is based on Hill's assumption A(n) and uses interval probability to quantify uncertainty, together with a discussion of NPI and objective Bayesianism.
Abstract: This paper consists of three main parts. First, we give an introduction to Hill's assumption A(n) and to the theory of interval probability, and an overview of recently developed theory and methods for nonparametric predictive inference (NPI), which is based on A(n) and uses interval probability to quantify uncertainty. Thereafter, we illustrate NPI by introducing a variation of the assumption A(n), suitable for inference based on circular data, with applications to several data sets from the literature. This includes attention to comparison of two groups of circular data, and to grouped data. We briefly discuss such inference for multiple future observations. We end the paper with a discussion of NPI and objective Bayesianism.
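
The assumption A(n) behind NPI is concrete enough to sketch: given n exchangeable real-valued observations, the next observation falls in each of the n + 1 intervals formed by the ordered data (with infinite endpoints at either extreme) with probability 1/(n + 1). An event then gets a lower probability from the intervals it fully contains and an upper probability from the intervals it merely intersects. A small illustration with invented data:

```python
import math

# Sketch of Hill's A(n): the next observation X_{n+1} lands in each of
# the n + 1 intervals between the ordered data with probability
# 1 / (n + 1). For an event (a, b): intervals fully inside contribute
# to the lower probability, intervals that merely overlap to the upper.

def npi_bounds(data, a, b):
    """Lower/upper probability that the next observation lies in (a, b)."""
    pts = [-math.inf] + sorted(data) + [math.inf]
    intervals = list(zip(pts, pts[1:]))
    m = len(intervals)  # n + 1 intervals
    lower = sum(a <= lo and hi <= b for lo, hi in intervals) / m
    upper = sum(lo < b and hi > a for lo, hi in intervals) / m
    return lower, upper

# Five invented observations; event: next value falls in (2, 7).
print(npi_bounds([1, 3, 4, 6, 9], 2, 7))
# -> (0.333, 0.667): 2 of the 6 intervals lie inside, 4 intersect.
```
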

110 citations

Journal ArticleDOI
TL;DR: An approximate algorithm is presented to obtain a posteriori intervals of probability when the available information is also given with intervals, using probability trees as a means of representing and computing with the convex sets of probabilities associated with the intervals.
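
The paper's probability-tree algorithm is not reproduced here, but the problem it addresses can be illustrated in the simplest case: when a prior probability is only known to lie in an interval, Bayes' rule maps that interval to an interval of posterior probabilities. A toy two-hypothesis sketch with invented numbers (the posterior is monotone in the prior, so the endpoints suffice; the general multi-category case needs the convex-set machinery the paper develops):

```python
# Toy version of "a posteriori intervals of probability": the prior
# P(H1) is only known to lie in [p_lo, p_hi]; Bayes' rule then yields
# an interval for the posterior P(H1 | data). This is not the paper's
# probability-tree algorithm, just the two-hypothesis special case.

def posterior_interval(p_lo, p_hi, lik1, lik2):
    """Posterior bounds on P(H1 | data) given an interval prior on P(H1)."""
    def post(p):
        return p * lik1 / (p * lik1 + (1 - p) * lik2)
    # The posterior is increasing in the prior, so endpoints suffice.
    return post(p_lo), post(p_hi)

# Invented numbers: P(H1) in [0.2, 0.4], P(data|H1)=0.8, P(data|H2)=0.3.
print(posterior_interval(0.2, 0.4, 0.8, 0.3))
# -> (0.4, 0.64)
```
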

64 citations

Journal ArticleDOI
TL;DR: This paper develops NPI for multinomial data when the total number of possible categories for the data is known and presents the upper and lower probabilities for events involving the next observation and several of their properties.

57 citations


Cites methods from "Inferences from multinomial data: Learning about a bag of marbles"

  • ...We also comment on differences between this NPI approach and corresponding inferences based on Walley’s Imprecise Dirichlet Model....

