Author

Igor Prünster

Bio: Igor Prünster is an academic researcher from Bocconi University. The author has contributed to research in topics: Dirichlet process & Prior probability. The author has an h-index of 29 and has co-authored 106 publications receiving 3033 citations. Previous affiliations of Igor Prünster include Instituto Tecnológico Autónomo de México & University of Turin.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of determining the distribution of means of random probability measures which are obtained by normalizing increasing additive processes and find a solution by resorting to a well-known inversion formula for characteristic functions due to Gurland.
Abstract: We consider the problem of determining the distribution of means of random probability measures which are obtained by normalizing increasing additive processes. A solution is found by resorting to a well-known inversion formula for characteristic functions due to Gurland. Moreover, expressions of the posterior distributions of those means, in the presence of exchangeable observations, are given. Finally, a section is devoted to the illustration of two examples of statistical relevance.
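
For context, the inversion formula alluded to can be stated in its standard Gil-Pelaez/Gurland form (quoted here from general references, not from the paper itself): for the random mean with distribution function F and characteristic function φ,

```latex
% Gurland/Gil-Pelaez inversion (standard form; notation assumed).
% At every continuity point x of F,
F(x) \;=\; \frac{1}{2} \;-\; \frac{1}{\pi} \int_0^{\infty}
  \frac{\operatorname{Im}\big[e^{-itx}\,\varphi(t)\big]}{t}\,dt ,
% while Gurland's statement covers arbitrary points via the symmetrized limit
\frac{F(x^{+}) + F(x^{-})}{2} \;=\; \frac{1}{2} \;-\; \frac{1}{\pi}
  \lim_{T\to\infty} \int_{0}^{T} \frac{\operatorname{Im}\big[e^{-itx}\,\varphi(t)\big]}{t}\,dt .
```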

264 citations

Book ChapterDOI
TL;DR: In this paper, the authors provide a review of Bayesian nonparametric models that go beyond the Dirichlet process, and show that in some cases of interest for statistical applications, the Dirichlet process is not an adequate prior choice.
Abstract: Bayesian nonparametric inference is a relatively young area of research and it has recently undergone a strong development. Most of its success can be explained by the considerable degree of flexibility it ensures in statistical modelling, if compared to parametric alternatives, and by the emergence of new and efficient simulation techniques that make nonparametric models amenable to concrete use in a number of applied statistical problems. Since its introduction in 1973 by T.S. Ferguson, the Dirichlet process has emerged as a cornerstone in Bayesian nonparametrics. Nonetheless, in some cases of interest for statistical applications the Dirichlet process is not an adequate prior choice and alternative nonparametric models need to be devised. In this paper we provide a review of Bayesian nonparametric models that go beyond the Dirichlet process.

232 citations

Journal ArticleDOI
TL;DR: A Bayesian non-parametric approach is taken, adopting a hierarchical model with a suitable non-parametric prior obtained from a generalized gamma process, to solve the problem of determining the number of components in a mixture model.
Abstract: Summary. The paper deals with the problem of determining the number of components in a mixture model. We take a Bayesian non-parametric approach and adopt a hierarchical model with a suitable non-parametric prior for the latent structure. A commonly used model for such a problem is the mixture of Dirichlet process model. Here, we replace the Dirichlet process with a more general non-parametric prior obtained from a generalized gamma process. The basic feature of this model is that it yields a partition structure for the latent variables which is of Gibbs type. This relates to the well-known (exchangeable) product partition models. If compared with the usual mixture of Dirichlet process model, the advantage of the generalization that we are examining lies in the availability of an additional parameter σ belonging to the interval (0,1): it is shown that such a parameter greatly influences the clustering behaviour of the model. A value of σ that is close to 1 generates a large number of clusters, most of which are of small size. Then, a reinforcement mechanism which is driven by σ acts on the mass allocation by penalizing clusters of small size and favouring those few groups containing a large number of elements. These features turn out to be very useful in the context of mixture modelling. Since it is difficult to specify a priori the reinforcement rate, it is reasonable to specify a prior for σ. Hence, the strength of the reinforcement mechanism is controlled by the data.
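
To make the role of σ concrete: the exact predictive weights of the generalized gamma prior involve its normalizing constants and are not reproduced here; instead, a minimal simulation sketch using the Pitman-Yor process, another Gibbs-type prior with the same kind of discount parameter σ, illustrates the clustering behaviour the abstract describes (all names and parameter values below are our own choices):

```python
import numpy as np

def sample_partition(n, sigma, theta, rng):
    """Simulate cluster sizes under a Pitman-Yor (Gibbs-type) predictive:
    an existing cluster of size n_j is joined with probability proportional
    to n_j - sigma, and a new cluster opens with probability proportional
    to theta + sigma * k, where k is the current number of clusters."""
    sizes = [1]  # the first observation opens the first cluster
    for _ in range(1, n):
        w = np.array([s - sigma for s in sizes] + [theta + sigma * len(sizes)])
        j = rng.choice(len(w), p=w / w.sum())
        if j < len(sizes):
            sizes[j] += 1    # join an existing cluster
        else:
            sizes.append(1)  # open a new cluster
    return sizes

rng = np.random.default_rng(0)
for sigma in (0.1, 0.9):
    sizes = sample_partition(1000, sigma, theta=1.0, rng=rng)
    print(f"sigma={sigma}: {len(sizes)} clusters, largest = {max(sizes)}")
```

Running this, σ near 1 produces many small clusters while σ near 0 produces few, matching the reinforcement mechanism described above.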

230 citations

Journal ArticleDOI
TL;DR: A comprehensive Bayesian non-parametric analysis of random probabilities which are obtained by normalizing random measures with independent increments (NRMI), which allows the derivation of a generalized Blackwell–MacQueen sampling scheme, which is then adapted to also cover mixture models driven by general NRMIs.
Abstract: One of the main research areas in Bayesian Nonparametrics is the proposal and study of priors which generalize the Dirichlet process. In this paper, we provide a comprehensive Bayesian non-parametric analysis of random probabilities which are obtained by normalizing random measures with independent increments (NRMI). Special cases of these priors have already been shown to be useful for statistical applications such as mixture models and species sampling problems. However, in order to fully exploit these priors, the derivation of the posterior distribution of NRMIs is crucial: here we achieve this goal and, indeed, provide explicit and tractable expressions suitable for practical implementation. The posterior distribution of an NRMI turns out to be a mixture with respect to the distribution of a specific latent variable. The analysis is completed by the derivation of the corresponding predictive distributions and by a thorough investigation of the marginal structure. These results allow one to derive a generalized Blackwell–MacQueen sampling scheme, which is then adapted to also cover mixture models driven by general NRMIs.
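
As background, the NRMI construction itself is short enough to state (standard definition, with notation assumed rather than taken from the paper):

```latex
% Sketch of the NRMI construction (standard definition; notation assumed).
% Let \tilde\mu be a completely random measure on \mathbb{X} with Lévy
% intensity \nu(ds, dx), i.e. with Laplace functional
\mathbb{E}\Big[ e^{-\int_{\mathbb{X}} f(x)\,\tilde\mu(dx)} \Big]
  \;=\; \exp\Big\{ -\int_{0}^{\infty}\!\!\int_{\mathbb{X}}
  \big( 1 - e^{-s f(x)} \big)\, \nu(ds, dx) \Big\}.
% Provided 0 < \tilde\mu(\mathbb{X}) < \infty almost surely, the NRMI is
\tilde p(\cdot) \;=\; \frac{\tilde\mu(\cdot)}{\tilde\mu(\mathbb{X})},
% and the Dirichlet process is recovered when \tilde\mu is a gamma process.
```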

211 citations

Journal ArticleDOI
TL;DR: In this article, the normalized inverse-Gaussian (N-IG) prior is proposed as an alternative to the Dirichlet process to be used in Bayesian hierarchical models.
Abstract: In recent years the Dirichlet process prior has experienced a great success in the context of Bayesian mixture modeling. The idea of overcoming discreteness of its realizations by exploiting it in hierarchical models, combined with the development of suitable sampling techniques, represents one of the reasons for its popularity. In this article we propose the normalized inverse-Gaussian (N–IG) process as an alternative to the Dirichlet process to be used in Bayesian hierarchical models. The N–IG prior is constructed via its finite-dimensional distributions. This prior, although sharing the discreteness property of the Dirichlet prior, is characterized by a more elaborate and sensible clustering which makes use of all the information contained in the data. Whereas in the Dirichlet case the mass assigned to each observation depends solely on the number of times that it occurred, for the N–IG prior the weight of a single observation depends heavily on the whole number of ties in the sample. Moreover, expressio...
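
For orientation, the finite-dimensional construction behind the N–IG distribution is usually presented by direct analogy with the Dirichlet case (a sketch under an assumed parametrization, not the article's exact notation):

```latex
% Sketch: N-IG by normalization (assumed parametrization). Let
W_i \sim \mathrm{IG}(\alpha_i, \gamma), \qquad i = 1, \dots, n,
% be independent inverse-Gaussian random variables. The normalized vector
(P_1, \dots, P_n) \;=\;
  \Big( \frac{W_1}{\sum_{j} W_j}, \dots, \frac{W_n}{\sum_{j} W_j} \Big)
% then follows the normalized inverse-Gaussian (N-IG) distribution, just as
% the Dirichlet distribution arises by normalizing independent gamma
% variables.
```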

211 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 Jan 2016
Table of Integrals, Series, and Products

4,085 citations

Book
01 Jan 2013
TL;DR: In this book, the author considers the distributional properties of Lévy processes and develops a potential theory for Lévy processes, together with their Wiener-Hopf factorizations.
Abstract: Contents: Preface to the revised edition; Remarks on notation; 1. Basic examples; 2. Characterization and existence; 3. Stable processes and their extensions; 4. The Lévy-Itô decomposition of sample functions; 5. Distributional properties of Lévy processes; 6. Subordination and density transformation; 7. Recurrence and transience; 8. Potential theory for Lévy processes; 9. Wiener-Hopf factorizations; 10. More distributional properties; Supplement; Solutions to exercises; References and author index; Subject index.
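
As a pointer to what the "characterization" chapter refers to, here is the standard Lévy-Khintchine formula (stated from general knowledge, not in this book's notation):

```latex
% Lévy-Khintchine formula (standard statement; notation assumed): a Lévy
% process (X_t) is characterized by a triplet (b, \sigma^2, \nu) through
\mathbb{E}\big[ e^{iuX_t} \big] \;=\; \exp\Big\{ t \Big( ibu - \tfrac{1}{2}\sigma^2 u^2
  + \int_{\mathbb{R}} \big( e^{iux} - 1 - iux\,\mathbf{1}_{\{|x|\le 1\}} \big)\, \nu(dx) \Big) \Big\},
% where the Lévy measure \nu satisfies \int_{\mathbb{R}} (1 \wedge x^2)\,\nu(dx) < \infty.
```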

1,957 citations

Journal ArticleDOI
01 Dec 2012-Ecology
TL;DR: An integrated sampling, rarefaction, and extrapolation methodology is proposed to compare the species richness of a set of communities based on samples of equal completeness (as measured by sample coverage) instead of equal size.
Abstract: We propose an integrated sampling, rarefaction, and extrapolation methodology to compare species richness of a set of communities based on samples of equal completeness (as measured by sample coverage) instead of equal size. Traditional rarefaction or extrapolation to equal-sized samples can misrepresent the relationships between the richnesses of the communities being compared because a sample of a given size may be sufficient to fully characterize the lower diversity community, but insufficient to characterize the richer community. Thus, the traditional method systematically biases the degree of differences between community richnesses. We derived a new analytic method for seamless coverage-based rarefaction and extrapolation. We show that this method yields less biased comparisons of richness between communities, and manages this with less total sampling effort. When this approach is integrated with an adaptive coverage-based stopping rule during sampling, samples may be compared directly without rarefaction, so no extra data is taken and none is thrown away. Even if this stopping rule is not used during data collection, coverage-based rarefaction throws away less data than traditional size-based rarefaction, and more efficiently finds the correct ranking of communities according to their true richnesses. Several hypothetical and real examples demonstrate these advantages.
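
The notion of sample coverage driving the method can be sketched in a few lines (the estimator below is the Good-Turing form with the singleton/doubleton correction; the function name and example data are our own):

```python
import numpy as np

def sample_coverage(abundances):
    """Estimate sample coverage: the fraction of a community's total
    abundance accounted for by the species present in the sample,
    estimated from the singleton (f1) and doubleton (f2) counts."""
    x = np.asarray(abundances)
    n = x.sum()
    f1 = np.sum(x == 1)  # species observed exactly once
    f2 = np.sum(x == 2)  # species observed exactly twice
    if f2 > 0:
        return 1.0 - (f1 / n) * ((n - 1) * f1 / ((n - 1) * f1 + 2 * f2))
    # fallback when no doubletons are observed
    return 1.0 - (f1 / n) * ((n - 1) * (f1 - 1) / ((n - 1) * (f1 - 1) + 2))

# Two hypothetical samples: comparing richness at equal coverage rather
# than equal sample size is the paper's central idea.
low_diversity = [50, 30, 10, 5, 3, 1, 1]
high_diversity = [20, 15, 10] + [1] * 12
print(sample_coverage(low_diversity), sample_coverage(high_diversity))
```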

1,316 citations

Posted Content
TL;DR: Concrete random variables, continuous relaxations of discrete random variables, are a new family of distributions with closed-form densities and a simple reparameterization; the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks is demonstrated.
Abstract: The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
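
A minimal sketch of drawing a Concrete (Gumbel-softmax) sample, written from the published description of the distribution (the function name, temperature values, and logits are our own choices):

```python
import numpy as np

def sample_concrete(logits, temperature, rng):
    """Draw one sample from a Concrete (Gumbel-softmax) distribution:
    perturb the logits with i.i.d. Gumbel(0, 1) noise, divide by the
    temperature, and apply a softmax. As the temperature goes to 0 the
    sample approaches a one-hot draw from the underlying categorical."""
    gumbel = -np.log(-np.log(rng.uniform(size=len(logits))))
    z = (np.asarray(logits) + gumbel) / temperature
    z = z - z.max()              # stabilize the softmax numerically
    expz = np.exp(z)
    return expz / expz.sum()

rng = np.random.default_rng(0)
logits = np.log([0.7, 0.2, 0.1])  # class probabilities in log space
print(sample_concrete(logits, temperature=0.1, rng=rng))  # near one-hot
print(sample_concrete(logits, temperature=5.0, rng=rng))  # heavily smoothed
```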

1,120 citations