
Showing papers on "Natural exponential family" published in 2019


Journal ArticleDOI
TL;DR: A new class of probability distributions, based on a cosine-sine transformation obtained by compounding a baseline distribution with cosine and sine functions, is introduced; on four real data sets it shows a better fit than some existing distributions according to several goodness-of-fit tests.
Abstract: In this paper, we introduce a new class of probability distributions, based on a cosine-sine transformation obtained by compounding a baseline distribution with cosine and sine functions. Some of its properties are explored. A special focus is given to a particular cosine-sine transformation using the exponential distribution as the baseline. Parameters of this particular cosine-sine exponential distribution are estimated by maximum likelihood, and a simulation study investigates the performance of these estimates. Applications to four real data sets show a better fit than some existing distributions according to several goodness-of-fit tests.

28 citations
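As a rough illustration of this kind of construction, the sketch below fits a sine-transformed exponential distribution by maximum likelihood. The specific transform F(x) = sin((π/2)·G(x)), the parameter bounds, and the simulated rate 0.7 are assumptions for illustration only; the paper's cosine-sine transformation may take a different form.

```python
# A minimal sketch, assuming the transform F(x) = sin((pi/2) * G(x)) with an
# exponential baseline G(x) = 1 - exp(-lam * x); this is one sine-type
# transform, not necessarily the exact cosine-sine transformation of the paper.
import numpy as np
from scipy.optimize import minimize_scalar

def cs_exp_pdf(x, lam):
    """Density of the sine-transformed exponential distribution."""
    g = 1.0 - np.exp(-lam * x)        # baseline CDF
    dg = lam * np.exp(-lam * x)       # baseline density
    return (np.pi / 2.0) * np.cos((np.pi / 2.0) * g) * dg

def fit_mle(data):
    """Maximum likelihood estimate of the rate parameter lam."""
    nll = lambda lam: -np.sum(np.log(cs_exp_pdf(data, lam)))
    return minimize_scalar(nll, bounds=(0.01, 10.0), method="bounded").x

# quick simulation check: sample by inverting F, then re-fit
rng = np.random.default_rng(0)
u = rng.uniform(size=5000)
x = -np.log(1.0 - (2.0 / np.pi) * np.arcsin(u)) / 0.7   # inverse CDF at lam = 0.7
print(fit_mle(x))                                        # should be close to 0.7
```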


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of computing tail probabilities of a random sum of positive random variables, where the individual claim variables follow a reproducible natural exponential family (NEF) distribution and the random number of terms has a NEF counting distribution with a cubic variance function.
Abstract: In this paper we consider the problem of computing tail probabilities of the distribution of a random sum of positive random variables. We assume that the individual claim variables follow a reproducible natural exponential family (NEF) distribution, and that the random number of terms has a NEF counting distribution with a cubic variance function. This specific modeling is supported by data on the aggregated claim distribution of an insurance company. Large tail probabilities are important as they reflect the risk of large losses; however, analytic or numerical expressions are not available. We propose several simulation algorithms based on an asymptotic analysis of the distribution of the counting variable and on the reproducibility property of the claim distribution. The aggregated sum is simulated efficiently by importance sampling using an exponential change of measure. We conclude with numerical experiments on these algorithms, based on real car insurance claim data.

13 citations
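A minimal sketch of the exponential change of measure idea, for the simplest special case of Poisson counts and exponential claims; the paper treats general reproducible NEF claims and NEF counting distributions with cubic variance functions, and the tilt choice and parameter values below are illustrative assumptions.

```python
# Importance sampling with an exponential change of measure for P(S > t),
# where S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i ~ Exp(beta).
import numpy as np

def tail_prob_is(t, lam=10.0, beta=1.0, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # CGF of the compound sum: kappa(theta) = lam * (beta / (beta - theta) - 1).
    # Choose the tilt theta from the saddlepoint equation kappa'(theta) = t.
    theta = beta - np.sqrt(lam * beta / t)
    kappa = lam * (beta / (beta - theta) - 1.0)
    # Under the tilted measure: N ~ Poisson(lam * M_X(theta)), X ~ Exp(beta - theta).
    n = rng.poisson(lam * beta / (beta - theta), size=n_sim)
    s = np.array([rng.exponential(1.0 / (beta - theta), size=k).sum() for k in n])
    weights = np.exp(-theta * s + kappa)   # likelihood ratio dP / dP_theta
    return np.mean((s > t) * weights)

print(tail_prob_is(t=30.0))   # tail probability far beyond the mean E[S] = lam / beta = 10
```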


Posted Content
TL;DR: It is demonstrated that fitting the generative model to learn the latent stimulus that underlies neural spiking data leads to better goodness-of-fit compared to other baselines, and that the approach achieves competitive performance compared to state-of-the-art algorithms for supervised Poisson image denoising, with significantly fewer parameters.
Abstract: We introduce a class of auto-encoder neural networks tailored to data from the natural exponential family (e.g., count data). The architectures are inspired by the problem of learning the filters in a convolutional generative model with sparsity constraints, often referred to as convolutional dictionary learning (CDL). Our work is the first to combine ideas from convolutional generative models and deep learning for data that are naturally modeled with a non-Gaussian distribution (e.g., binomial and Poisson). This perspective provides us with a scalable and flexible framework that can be re-purposed for a wide range of tasks and assumptions on the generative model. Specifically, the iterative optimization procedure for solving CDL, an unsupervised task, is mapped to an unfolded and constrained neural network, with iterative adjustments to the inputs to account for the generative distribution. We also show that the framework can easily be extended for discriminative training, appropriate for a supervised task. We demonstrate 1) that fitting the generative model to learn, in an unsupervised fashion, the latent stimulus that underlies neural spiking data leads to better goodness-of-fit compared to other baselines, 2) competitive performance compared to state-of-the-art algorithms for supervised Poisson image denoising, with significantly fewer parameters, and 3) the gradient dynamics of a shallow binomial auto-encoder.

9 citations
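A toy sketch of the underlying idea: unrolled ISTA-style iterations whose gradient step uses the Poisson negative log-likelihood (a natural exponential family) instead of the squared error. The log link, dimensions, step size, and thresholding schedule below are illustrative assumptions and do not reproduce the paper's architecture.

```python
# Unrolled proximal-gradient (ISTA-like) encoder for count data.
import numpy as np

def soft_threshold(z, thresh):
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def poisson_encoder(y, A, n_iters=20, step=0.05, reg=0.1):
    """Sparse code z for count data y ~ Poisson(exp(A @ z))."""
    z = np.zeros(A.shape[1])
    for _ in range(n_iters):
        rate = np.exp(A @ z)                 # mean under the canonical log link
        grad = A.T @ (rate - y)              # gradient of the Poisson NLL
        z = soft_threshold(z - step * grad, step * reg)
    return z

rng = np.random.default_rng(1)
A = 0.3 * rng.normal(size=(50, 20))
z_true = np.zeros(20)
z_true[:3] = 1.0
y = rng.poisson(np.exp(A @ z_true))          # simulated counts
print(poisson_encoder(y, A)[:5])
```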


Journal ArticleDOI
TL;DR: Two probability distributions, formed by compounding the inverse Weibull with zero-truncated Poisson and geometric distributions, are analyzed and found to exhibit both monotone and non-monotone failure rates.
Abstract: In this paper, two probability distributions are analyzed which are formed by compounding the inverse Weibull with zero-truncated Poisson and geometric distributions. The distributions can be us...

8 citations
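For context, one standard way such compound distributions arise is by taking the maximum of a zero-truncated Poisson number of i.i.d. baseline variables; the sketch below uses the inverse Weibull baseline, though whether the paper compounds via the maximum or the minimum is not stated in this excerpt.

```latex
% Illustrative compounding scheme (maximum of N i.i.d. inverse Weibull
% variables with a zero-truncated Poisson count); the paper's exact
% construction may differ.
\[
  G(x) = \exp\!\bigl\{-(\beta/x)^{\alpha}\bigr\}, \qquad x > 0
  \qquad \text{(inverse Weibull baseline)},
\]
\[
  F(x) = \Pr\Bigl(\max_{1 \le i \le N} X_i \le x\Bigr)
       = \frac{e^{\lambda G(x)} - 1}{e^{\lambda} - 1},
  \qquad N \sim \text{zero-truncated Poisson}(\lambda).
\]
```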


Journal ArticleDOI
TL;DR: In this paper, different stochastic ageing properties of the Transformed-Transformer family of distributions are studied, as well as different stochastic orderings of this family of distributions.
Abstract: SYNOPTIC ABSTRACT: The Transformed-Transformer family of distributions is the family obtained by transforming a random variable T through another (transformer) random variable X, using a weight function ω of the cumulative distribution function of X. In this article, we study different stochastic ageing properties, as well as different stochastic orderings, of this family of distributions. We discuss the results for several well-known distributions.

2 citations
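The construction described in the abstract is commonly written as follows, assuming T has density r and CDF R on a support (a, b), X has CDF G, and the weight function ω maps [0, 1] into (a, b):

```latex
% The T-X construction from the abstract; the choice \omega(u) = -\log(1 - u)
% is one common example of a weight function.
\[
  F(x) \;=\; \int_{a}^{\omega(G(x))} r(t)\,\mathrm{d}t \;=\; R\bigl(\omega(G(x))\bigr).
\]
```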


Journal ArticleDOI
TL;DR: This article gives a characterization of the Riesz measure and of the Wishart exponential family on homogeneous cones through the invariance property of a natural exponential family under the action of the triangular group.
Abstract: In this paper, we present a characterization theorem for the Riesz measure and the Wishart exponential family on homogeneous cones, based on the invariance property of a natural exponential family under the action of the triangular group.

1 citation
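For reference, the natural exponential family generated by a σ-finite measure μ on a cone Ω is defined through its Laplace transform; the Wishart family discussed above arises when μ is a Riesz measure, and the exact form of that measure is what the paper characterizes.

```latex
% Background: the NEF generated by a sigma-finite measure \mu on a cone \Omega.
\[
  L_{\mu}(\theta) = \int_{\Omega} e^{\langle \theta, x \rangle}\,\mu(\mathrm{d}x),
  \qquad
  P_{\theta}(\mathrm{d}x) = \exp\!\bigl\{\langle \theta, x \rangle
      - \log L_{\mu}(\theta)\bigr\}\,\mu(\mathrm{d}x),
  \qquad \theta \in \Theta(\mu).
\]
```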