Journal ArticleDOI

Bayes theory: Hartigan, Springer-Verlag, New York 1983, p. 145, DM 46,-

01 Dec 1984 - Metrika (Springer Science and Business Media LLC) - Vol. 31, Iss. 1, p. 214
TL;DR: The theory of Bayesian inference at a rather sophisticated mathematical level is discussed in this book, which is based on lectures given to students who have already had a course in measure-theoretic probability and which has the rather clipped style of notes.
Abstract: This is a book about the theory of Bayesian inference at a rather sophisticated mathematical level. It is based on lectures given to students who already have had a course in measure-theoretic probability, and has the rather clipped style of notes. This led me to some difficulties of comprehension, especially when typographical errors occur, as in the definition of a random variable. Against this, there is no unnecessary material and space for a few human touches. The development takes as fundamental the notion of expectation, though that word is scarcely used: it does not appear in the inadequate index but has a brief mention on page 17. The book begins therefore with linear, non-negative, continuous operators, and the treatment has the novelty that it does not require that the total probability be one: indeed, infinity is admitted, this having the advantage that improper distributions of the Jeffreys type can be included. There is an original and interesting account of marginal and conditional distributions with impropriety. For example, in discussing a uniform distribution over pairs (i, j) of integers, the sets j = 1 and j = 2 both have infinite probability and cannot therefore be compared, so that the conditional probabilities p(i = 1 | j = 1) and p(i = 1 | j = 2) require separate discussion. My own view is that this feature is not needed, for although improper distributions have some interest in low dimensions (and mainly in achieving an unnecessary match between Bayesian and Fisherian ideas), they fail in high dimensions, as Hartigan shows in chapter 9, where there is an admirable account of many normal means. A lesser objection is the complexity introduced by admitting impropriety: Bayes theorem takes 14 lines to state and 20 to prove. Chapter 5 is interestingly called "Making Probabilities" and discusses Jaynes' maximum entropy principle, Jeffreys' invariance, and similarity as ways of constructing distributions; those produced by the first two methods are typically improper. This attitude is continued into chapter 8, where exponential families are introduced as those minimizing information subject to constraints. There is a discussion of decision theory, as distinct from inference, but there is no attempt to consider utility: all is with respect to an undefined loss function. The consideration of the different types of admissibility is very brief, and the opportunity to discuss the mathematically sensitive but practically meaningful aspects of this topic is lost. Other chapters are concerned with convergence, unbiasedness and confidence, multinomials, asymptotic normality, robustness and non-parametric procedures; the last is mainly devoted to a good account of the Dirichlet process. Before all this mathematics, the book begins with a brief account of the various theories of probability: logical, empirical and subjective. At the end of the account is a fascinating discussion of why the author thinks "there is a probability 0.05 that there will be a large-scale nuclear war between the U.S. and the U.S.S.R. before 2000". This connection between mathematics and reality is most warmly to be welcomed. The merit of this book lies in the novelty of the perspective presented. It is like looking at a courtyard from some unfamiliar window in an upper turret. Things look different from up there. Some corners of the courtyard are completely obscured. (It is surprising that there is no mention at all of the likelihood principle, and only a passing reference to likelihood.) Other matters are better appreciated because of the unfamiliar aspect: normal means, for example. The book does not therefore present a balanced view of Bayesian theory, but it does provide an interesting and valuable account of many aspects of it and should command the attention of any statistical theorist.
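To make the impropriety example concrete, here is a minimal worked sketch in my own notation (an illustration of the point, not a passage from the book). Under an improper uniform distribution on pairs (i, j) of integers, the marginal masses

\[
P(j = 1) = \sum_{i \in \mathbb{Z}} 1 = \infty, \qquad P(j = 2) = \sum_{i \in \mathbb{Z}} 1 = \infty,
\]

are both infinite and so cannot be compared, yet each conditional distribution is still well defined up to proportionality,

\[
p(i = i_0 \mid j = 1) \propto 1, \qquad p(i = i_0 \mid j = 2) \propto 1, \qquad i_0 \in \mathbb{Z},
\]

i.e. conditioning on j = 1 or on j = 2 again yields the (improper) uniform distribution over the integers, which is why the two conditionals require separate treatment rather than a comparison of the conditioning events themselves.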


Citations
Journal ArticleDOI
TL;DR: The weighted likelihood bootstrap (WLB), a generalization of Rubin's Bayesian bootstrap, is introduced in this paper as a way to simulate approximately from a posterior distribution.
Abstract: We introduce the weighted likelihood bootstrap (WLB) as a way to simulate approximately from a posterior distribution. This method is often easy to implement, requiring only an algorithm for calculating the maximum likelihood estimator, such as iteratively reweighted least squares. In the generic weighting scheme, the WLB is first order correct under quite general conditions. Inaccuracies can be removed by using the WLB as a source of samples in the sampling-importance resampling (SIR) algorithm, which also allows incorporation of particular prior information. The SIR-adjusted WLB can be a competitive alternative to other integration methods in certain models. Asymptotic expansions elucidate the second-order properties of the WLB, which is a generalization of Rubin's Bayesian bootstrap. The calculation of approximate Bayes factors for model comparison is also considered. We note that, given a sample simulated from the posterior distribution, the required marginal likelihood may be simulation-consistently estimated by the harmonic mean of the associated likelihood values; a modification of this estimator that avoids instability is also noted. These methods provide simple ways of calculating approximate Bayes factors and posterior model probabilities for a very wide class of models.
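A minimal sketch of the WLB idea on a toy model (a normal mean with known variance; the model, names and weighting choice below are my own illustration, not the paper's code): draw random exponential weights, maximize the weighted log-likelihood, and treat the maximizers as approximate posterior draws; the harmonic-mean estimate of the marginal likelihood mentioned at the end of the abstract is also shown.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(loc=1.0, scale=1.0, size=50)   # toy data: N(theta, 1), theta unknown

    def wlb_draws(x, n_draws=2000, rng=rng):
        """Weighted likelihood bootstrap for a normal mean with known variance 1.

        Each draw gives observation i an Exp(1) weight w_i and maximizes the
        weighted log-likelihood sum_i w_i * log phi(x_i - theta); for this model
        the maximizer is simply the weighted mean of the data.
        """
        n = len(x)
        w = rng.exponential(1.0, size=(n_draws, n))
        return (w * x).sum(axis=1) / w.sum(axis=1)

    theta = wlb_draws(x)                          # approximate posterior draws

    # Harmonic-mean estimate of the marginal likelihood from the draws
    # (simple but unstable, as the paper itself notes).
    loglik = np.array([stats.norm.logpdf(x, loc=t, scale=1.0).sum() for t in theta])
    a = -loglik
    log_marglik = -(a.max() + np.log(np.mean(np.exp(a - a.max()))))

    print(theta.mean(), theta.std(), log_marglik)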

1,474 citations

Book ChapterDOI
TL;DR: This article surveys global games as a tractable setting for higher-order beliefs, building on Mertens and Zamir's (1985) result that the "type" of a player in an incomplete information game can be completely described by a full hierarchy of beliefs at all levels.
Abstract: Many economic problems are naturally modeled as a game of incomplete information, where a player’s payoff depends on his own action, the actions of others, and some unknown economic fundamentals. For example, many accounts of currency attacks, bank runs, and liquidity crises give a central role to players’ uncertainty about other players’ actions. Because other players’ actions in such situations are motivated by their beliefs, the decision maker must take account of the beliefs held by other players. We know from the classic contribution of Harsanyi (1967–1968) that rational behavior in such environments not only depends on economic agents’ beliefs about economic fundamentals, but also depends on beliefs of higher-order – i.e., players’ beliefs about other players’ beliefs, players’ beliefs about other players’ beliefs about other players’ beliefs, and so on. Indeed, Mertens and Zamir (1985) have shown how one can give a complete description of the “type” of a player in an incomplete information game in terms of a full hierarchy of beliefs at all levels. In principle, optimal strategic behavior should be analyzed in the space of all possible infinite hierarchies of beliefs; however, such analysis is highly complex for players and analysts alike and is likely to prove intractable in general. It is therefore useful to identify strategic environments with incomplete information that are rich enough to capture the important role of higher-order beliefs in economic settings, but simple enough to allow tractable analysis. Global games, first studied by Carlsson and van Damme (1993a), represent one such environment. Uncertain economic fundamentals are summarized by a state θ and each player observes a different signal of the state with a small amount of noise. Assuming that the noise technology is common knowledge among the players, each player’s signal generates beliefs about fundamentals, beliefs about other players’ beliefs about fundamentals, and so on. Our purpose in this paper is to describe how such models work, how global game reasoning can be applied to economic problems, and how this analysis relates to more general analysis of higher-order beliefs in strategic settings.
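A compact sketch of the signal structure just described, in my own notation (the posterior density matches the formula quoted in the excerpt below):

\[
x_i = \theta + \sigma \varepsilon_i, \qquad \varepsilon_i \sim f \ \text{i.i.d.},
\]

and, under an improper uniform prior on \(\theta\) over the real line, a player observing \(x_i\) holds the posterior density

\[
\pi(\theta \mid x_i) = \frac{1}{\sigma}\, f\!\left(\frac{x_i - \theta}{\sigma}\right),
\]

which is proper even though the prior is not; beliefs about other players' signals, and hence the higher-order beliefs discussed above, are then built up from this density.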

1,108 citations


Cites background from "Bayes theory: Hartigan, Springer-Ve..."

  • ...The uniform prior on the real line is “improper” (i.e., has infinite probability mass), but the conditional probabilities are well defined: a player observing signal xi puts density (1/σ ) f ((xi − θ )/σ ) on state θ (see Hartigan 1983)....


  • ...See Hartigan (1983) for a discussion of improper priors....


Journal Article
TL;DR: This paper presents an empirical Bayes method for analysing replicated microarray data and presents the results of a simulation study estimating the ROC curve of B and three other statistics for determining differential expression: the average and two simple modifications of the usual t-statistic.
Abstract: cDNA microarrays permit us to study the expression of thousands of genes simultaneously. They are now used in many different contexts to compare mRNA levels between two or more samples of cells. Microarray experiments typically give us expression measurements on a large number of genes, say 10,000-20,000, but with few, if any, replicates for each gene. Traditional methods using means and standard deviations to detect differential expression are not completely satisfactory in this context, and so a different approach seems desirable. In this paper we present an empirical Bayes method for analysing replicated microarray data. Data from all the genes in a replicate set of experiments are combined into estimates of parameters of a prior distribution. These parameter estimates are then combined at the gene level with means and standard deviations to form a statistic B which can be used to decide whether differential expression has occurred. The statistic B avoids the problems of using averages or t-statistics. The method is illustrated using data from an experiment comparing the expression of genes in the livers of SR-BI transgenic mice with that of the corresponding wild-type mice. In addition we present the results of a simulation study estimating the ROC curve of B and three other statistics for determining differential expression: the average and two simple modifications of the usual t-statistic. B was found to be the most powerful of the four, though the margin was not great. The data were simulated to resemble the SR-BI data.
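As an illustration of the empirical-Bayes shrinkage idea described above, here is a small sketch of a variance-moderated statistic of my own devising: it mimics the spirit of combining prior parameter estimates with gene-level means and standard deviations, not the exact form of the authors' B statistic.

    import numpy as np

    def moderated_t(data, prior_df=4.0):
        """Toy empirical-Bayes moderated t for replicated log-ratios.

        data: array of shape (n_genes, n_reps) of log expression ratios.
        Gene-wise variances are shrunk toward a pooled prior variance
        (estimated from all genes) before forming a t-like score.
        """
        n_genes, n_reps = data.shape
        means = data.mean(axis=1)
        s2 = data.var(axis=1, ddof=1)
        s2_prior = np.median(s2)                  # crude pooled prior estimate
        df = n_reps - 1
        s2_post = (prior_df * s2_prior + df * s2) / (prior_df + df)
        return means / np.sqrt(s2_post / n_reps)

    rng = np.random.default_rng(1)
    data = rng.normal(0.0, 0.5, size=(1000, 4))
    data[:20] += 2.0                              # a few "differentially expressed" genes
    scores = moderated_t(data)
    print(np.argsort(-np.abs(scores))[:10])       # top-ranked genes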

737 citations


Additional excerpts

  • ...See e.g., Hartigan (1983)....


Posted Content
TL;DR: In this article, the authors build a model of currency crises where a single large investor and a continuum of small investors independently decide whether to attack a currency based on their private information about fundamentals.
Abstract: Do large investors increase the vulnerability of a country to speculative attacks in the foreign exchange markets? To address this issue, we build a model of currency crises where a single large investor and a continuum of small investors independently decide whether to attack a currency based on their private information about fundamentals. Even abstracting from signaling, the presence of the large investor does make all other traders more aggressive in their selling. Relative to the case in which there is no large investor, small investors attack the currency when fundamentals are stronger. Yet the difference can be small, or even nil, depending on the relative precision of the private information of the small and large investors. Adding signaling makes the influence of the large trader on small traders' behaviour much stronger.

245 citations

Journal ArticleDOI
TL;DR: Conditional probability distributions seem to have a bad reputation when it comes to rigorous treatment of conditioning: in print, measurability and averaging properties substitute for intuitive ideas about random variables behaving like constants given particular conditioning information.
Abstract: Conditional probability distributions seem to have a bad reputation when it comes to rigorous treatment of conditioning. Technical arguments are published as manipulations of Radon‐Nikodym derivatives, although we all secretly perform heuristic calculations using elementary definitions of conditional probabilities. In print, measurability and averaging properties substitute for intuitive ideas about random variables behaving like constants given particular conditioning information. One way to engage in rigorous, guilt-free manipulation of conditional distributions is to treat them as disintegrating measures—families of probability measures concentrating on the level sets of a conditioning statistic. In this paper we present a little theory and a range of examples—from EM algorithms and the Neyman factorization, through Bayes theory and marginalization paradoxes—to suggest that disintegrations have both intuitive appeal and the rigor needed for many problems in mathematical statistics.
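A one-line sketch of the disintegration idea in standard notation (mine, not a quotation from the paper): for a conditioning statistic T with image law \(\mu = P \circ T^{-1}\), a disintegration of P is a family of probability measures \(\{P_t\}\), with each \(P_t\) concentrated on the level set \(\{T = t\}\), such that

\[
P(A) = \int P_t(A)\,\mu(dt) \qquad \text{for every measurable set } A,
\]

so \(P_t\) serves as the conditional distribution given T = t and can be manipulated pointwise in t rather than only through almost-everywhere averaging identities.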

228 citations
