
Showing papers by "James O. Berger published in 2021"


Journal ArticleDOI
TL;DR: The median probability model (MPM) is the model consisting of those variables whose marginal posterior probability of inclusion is at least 0.5; it is the best single model for prediction in orthogonal and nested correlated designs.
Abstract: The median probability model (MPM) (Barbieri and Berger, 2004) is defined as the model consisting of those variables whose marginal posterior probability of inclusion is at least 0.5. The MPM rule yields the best single model for prediction in orthogonal and nested correlated designs. This result was originally conceived under a specific class of priors, such as the point mass mixtures of non-informative and g-type priors. The MPM rule, however, has become so popular that it is now being deployed for a wider variety of priors and under correlated designs, where the properties of MPM are not yet completely understood. The main thrust of this work is to shed light on properties of MPM in these contexts by (a) characterizing situations when MPM is still safe under correlated designs, and (b) providing significant generalizations of MPM to a broader class of priors (such as continuous spike-and-slab priors). We also provide new supporting evidence for the suitability of g-priors, as opposed to independent product priors, using new predictive matching arguments. Furthermore, we emphasize the importance of prior model probabilities and highlight the merits of non-uniform prior probability assignments using the notion of model aggregates.
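The MPM rule described in the abstract reduces to a simple computation once posterior model probabilities are available. A minimal sketch, using hypothetical posterior probabilities over models of three candidate variables (in practice these would come from a Bayesian variable-selection procedure):

```python
# Sketch of the median probability model (MPM) rule (Barbieri & Berger, 2004).
# The posterior model probabilities below are assumed for illustration only.

# Models are encoded as inclusion tuples over 3 variables; probabilities sum to 1.
posterior = {
    (1, 0, 0): 0.10,
    (1, 1, 0): 0.35,
    (1, 0, 1): 0.20,
    (1, 1, 1): 0.25,
    (0, 1, 0): 0.10,
}

p = len(next(iter(posterior)))  # number of candidate variables

# Marginal posterior inclusion probability of each variable:
# the total probability of all models that contain it.
inclusion = [
    sum(prob for model, prob in posterior.items() if model[j] == 1)
    for j in range(p)
]

# The MPM keeps exactly those variables with inclusion probability >= 0.5.
mpm = [j for j in range(p) if inclusion[j] >= 0.5]

print(inclusion)  # [0.9, 0.7, 0.45]
print(mpm)        # [0, 1] -- variables 0 and 1 form the MPM
```

Note that the MPM need not coincide with the highest posterior probability model: here the single most probable model is (1, 1, 0) with probability 0.35, which happens to match, but in general the two rules can disagree.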

14 citations


Journal ArticleDOI
01 Mar 2021-Test
TL;DR: It is shown that information inconsistency is a widespread problem under standard priors, while certain theoretically recommended priors, including scale mixtures of conjugate priors and adaptive priors, are information consistent.
Abstract: Informally, ‘information inconsistency’ is the property that has been observed in some Bayesian hypothesis testing and model selection scenarios whereby the Bayesian conclusion does not become definitive when the data seem to become definitive. An example is that, when performing a t test using standard conjugate priors, the Bayes factor of the alternative hypothesis to the null hypothesis remains bounded as the t statistic grows to infinity. The goal of this paper is to thoroughly investigate information inconsistency in various Bayesian testing problems. We consider precise hypothesis tests, one-sided hypothesis tests, and multiple hypothesis tests under normal linear models with dependent observations. Standard priors are considered, such as conjugate and semi-conjugate priors, as well as variations of Zellner’s g prior (e.g., fixed g priors, mixtures of g priors, and adaptive (data-based) g priors). It is shown that information inconsistency is a widespread problem using standard priors while certain theoretically recommended priors, including scale mixtures of conjugate priors and adaptive priors, are information consistent.
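The bounded-Bayes-factor phenomenon in the abstract can be seen numerically. A sketch under an assumed illustrative setting (fixed-g Zellner prior, normal linear model, closed-form Bayes factor as in Liang et al., 2008), not the paper's full treatment:

```python
# Information inconsistency under a fixed-g Zellner prior: for a model with
# p predictors against the intercept-only null, the Bayes factor is
#     BF_10 = (1 + g)^((n - p - 1)/2) * (1 + g*(1 - R2))^(-(n - 1)/2),
# which stays bounded as R2 -> 1, i.e. as the data become definitive.
# The values n = 20, p = 1, g = 20 are assumed for illustration.

def bayes_factor_fixed_g(r2, n=20, p=1, g=20.0):
    """Bayes factor of the p-predictor model against the null model."""
    return (1 + g) ** ((n - p - 1) / 2) * (1 + g * (1 - r2)) ** (-(n - 1) / 2)

for r2 in (0.90, 0.99, 0.9999, 1.0):
    print(r2, bayes_factor_fixed_g(r2))

# Even at R2 = 1 the Bayes factor equals (1 + g)^((n - p - 1)/2),
# a finite bound: the Bayesian conclusion never becomes definitive,
# no matter how overwhelming the data.
```

Mixtures of g priors avoid this by integrating g against a prior with sufficiently heavy tails, which lets the Bayes factor diverge as the evidence becomes overwhelming.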

5 citations


Journal ArticleDOI
TL;DR: The Bayesian approach is shown to have excellent frequentist properties and is argued to be the most effective way of obtaining frequentist multiplicity control, without sacrificing power, when there is considerable test statistic dependence.
Abstract: The problem of testing mutually exclusive hypotheses with dependent test statistics is considered. Bayesian and frequentist approaches to multiplicity control are studied and compared to help gain understanding as to the effect of test statistic dependence on each approach. The Bayesian approach is shown to have excellent frequentist properties and is argued to be the most effective way of obtaining frequentist multiplicity control, without sacrificing power, when there is considerable test statistic dependence.
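The mechanism by which Bayesian prior model probabilities induce multiplicity control can be illustrated with a standard construction (in the spirit of Scott and Berger, 2010). This is a hedged sketch of the general mechanism only; it does not reproduce the paper's treatment of dependent test statistics:

```python
# Automatic Bayesian multiplicity control via prior model probabilities.
# Under a uniform prior on the number k of true signals among m hypotheses,
# each specific k-signal configuration receives prior mass 1/((m+1)*C(m,k)).
# The prior odds of any one k-signal configuration against the all-null
# configuration are therefore 1/C(m,k), a penalty that automatically
# strengthens as more hypotheses are tested.

from math import comb

def prior_odds_vs_null(m, k):
    """Prior odds of one specific k-signal configuration against the
    all-null configuration, under a uniform prior on k = 0..m."""
    return comb(m, 0) / comb(m, k)

# The penalty for claiming k = 2 signals grows with the number of tests:
print(prior_odds_vs_null(10, 2))    # 1/45
print(prior_odds_vs_null(100, 2))   # 1/4950
```

No per-test adjustment of significance thresholds is needed: the multiplicity penalty enters through the prior model probabilities, which is why uniform prior probabilities on models (rather than on the number of signals) fail to control multiplicity.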

3 citations


OtherDOI
19 Feb 2021

1 citation


Journal Article
01 Jan 2021-Test
TL;DR: In this paper, the authors show that information inconsistency is ubiquitous in Bayesian hypothesis testing under conjugate priors, and they also show that theoretically recommended priors are information consistent.
Abstract: Informally, "Information Inconsistency" is the property that has been observed in many Bayesian hypothesis testing and model selection procedures whereby the Bayesian conclusion does not become definitive when the data seem to become definitive. An example is that, when performing a t-test using standard conjugate priors, the Bayes factor of the alternative hypothesis to the null hypothesis remains bounded as the t statistic grows to infinity. This paper shows that information inconsistency is ubiquitous in Bayesian hypothesis testing under conjugate priors. Yet the title does not fully describe the paper, since we also show that theoretically recommended priors, including scale mixtures of conjugate priors and adaptive priors, are information consistent. Hence the paper is simply a forceful warning that use of conjugate priors in testing and model selection is highly problematic, and should be replaced by the information-consistent alternatives.

1 citation