
Showing papers on "Poisson distribution published in 2013"


Journal ArticleDOI
TL;DR: This work extends Poisson surface reconstruction to explicitly incorporate the points as interpolation constraints and presents several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster, higher-quality surface reconstructions.
Abstract: Poisson surface reconstruction creates watertight surfaces from oriented point sets. In this work we extend the technique to explicitly incorporate the points as interpolation constraints. The extension can be interpreted as a generalization of the underlying mathematical framework to a screened Poisson equation. In contrast to other image and geometry processing techniques, the screening term is defined over a sparse set of points rather than over the full domain. We show that these sparse constraints can nonetheless be integrated efficiently. Because the modified linear system retains the same finite-element discretization, the sparsity structure is unchanged, and the system can still be solved using a multigrid approach. Moreover we present several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster, higher-quality surface reconstructions.

1,712 citations
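The screened Poisson formulation adds a pointwise data-fidelity (screening) term to the Poisson equation while leaving the finite-element sparsity pattern intact. A minimal 1D finite-difference sketch (purely illustrative; the paper solves the 3D problem on an adaptive octree with a multigrid solver) shows how a sparse set of screening constraints enters the linear system without changing its sparsity structure:

```python
import numpy as np

# Minimal 1D screened Poisson sketch: solve -u'' + alpha * u = f on [0, 1]
# with Dirichlet boundaries, where the screening weight alpha is nonzero
# only at a sparse set of "sample point" locations. All values illustrative.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)               # source term (stand-in for div of normals)

alpha = np.zeros(n)                 # screening only at sparse constraint points
alpha[[20, 50, 80]] = 100.0

# Assemble the tridiagonal system (-u'' discretized) plus diagonal screening.
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = -1.0 / h**2
    A[i, i]     =  2.0 / h**2 + alpha[i]
    A[i, i + 1] = -1.0 / h**2
A[0, 0] = A[-1, -1] = 1.0           # Dirichlet rows: u(0) = u(1) = 0
b = f.copy()
b[0] = b[-1] = 0.0

u = np.linalg.solve(A, b)
```

Note that the screening term only adds to the diagonal, which is why the modified system retains the same sparsity structure and remains amenable to multigrid.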


Journal ArticleDOI
TL;DR: The Poisson regression model using a sandwich variance estimator has become a viable alternative to the logistic regression model for the analysis of prospective studies with independent binary outcomes and is extended to studies with correlated binary outcomes as arise in longitudinal or cluster randomization studies.
Abstract: The Poisson regression model using a sandwich variance estimator has become a viable alternative to the logistic regression model for the analysis of prospective studies with independent binary outcomes. The primary advantage of this approach is that it readily provides covariate-adjusted risk ratios and associated standard errors. In this article, the model is extended to studies with correlated binary outcomes as arise in longitudinal or cluster randomization studies. The key step involves a cluster-level grouping strategy for the computation of the middle term in the sandwich estimator. For a single binary exposure variable without covariate adjustment, this approach results in risk ratio estimates and standard errors that are identical to those found in the survey sampling literature. Simulation results suggest that it is reliable for studies with correlated binary data, provided the total number of clusters is at least 50. Data from observational and cluster randomized studies are used to illustrate the methods.

524 citations
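The "modified Poisson" approach described above can be sketched in plain NumPy: fit a Poisson regression to binary outcomes, then use a sandwich (robust) variance. The simulation below covers only the independent-outcome case; the paper's contribution is the cluster-level grouping of the middle term, which is not shown here. All data and parameter values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate independent binary outcomes with a true risk ratio of 2.0
# for exposed (x = 1) vs. unexposed (x = 0) subjects.
n = 20000
x = rng.integers(0, 2, size=n)
p = 0.1 * 2.0**x                       # P(y=1): 0.1 unexposed, 0.2 exposed
y = (rng.random(n) < p).astype(float)
X = np.column_stack([np.ones(n), x])   # design matrix with intercept

# Fit Poisson regression (log link) by Newton-Raphson; exp(beta) are
# covariate-adjusted risk ratios even though y is binary.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

# Sandwich variance: bread @ meat @ bread, which corrects the Poisson
# model's misspecified variance for binary outcomes.
mu = np.exp(X @ beta)
bread = np.linalg.inv(X.T @ (mu[:, None] * X))
meat = X.T @ (((y - mu) ** 2)[:, None] * X)
cov = bread @ meat @ bread

risk_ratio = np.exp(beta[1])
se_log_rr = np.sqrt(cov[1, 1])
```

For correlated outcomes, the paper replaces the per-observation terms in `meat` with cluster-level sums of score contributions.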


Journal ArticleDOI
TL;DR: In this article, a new Stata command, traj, is introduced for fitting to longitudinal data finite (discrete) mixture models designed to identify clusters of individuals following similar progressions of some behavior or outcome over age or time.
Abstract: Group-based trajectory models are used to investigate population differences in the developmental courses of behaviors or outcomes. This note introduces a new Stata command, traj, for fitting to longitudinal data finite (discrete) mixture models designed to identify clusters of individuals following similar progressions of some behavior or outcome over age or time. Normal, Censored normal, Poisson, Zero-inflated Poisson, and Logistic distributions are supported.

523 citations
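A stripped-down analogue of what traj fits is a finite mixture of Poisson distributions estimated by EM. The sketch below uses two groups with constant rates; traj additionally models each group's log-rate as a polynomial in age or time and supports the other listed distributions. All values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two latent groups with different Poisson rates.
n = 2000
z = rng.random(n) < 0.4                      # true group membership
y = np.where(z, rng.poisson(8.0, n), rng.poisson(2.0, n))

# EM for a two-component Poisson mixture. The log y! term is omitted
# from the log-pmf because it is common to both components and cancels
# in the responsibilities.
pi, lam = 0.5, np.array([1.0, 5.0])
for _ in range(200):
    # E-step: posterior probability of group 1 for each subject.
    logw = np.stack([
        np.log(1 - pi) + y * np.log(lam[0]) - lam[0],
        np.log(pi)     + y * np.log(lam[1]) - lam[1],
    ])
    logw -= logw.max(axis=0)
    w = np.exp(logw)
    r = w[1] / w.sum(axis=0)
    # M-step: update mixing weight and group-specific rates.
    pi = r.mean()
    lam = np.array([np.sum((1 - r) * y) / np.sum(1 - r),
                    np.sum(r * y) / np.sum(r)])
```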


Journal ArticleDOI
TL;DR: This paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers, using recent applications of stochastic geometry to analyze cellular systems.
Abstract: Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge.

451 citations
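Second-order moment matching to a Gamma distribution, as used in the abstract, needs only the first two moments of the interference: shape k = E[I]²/Var[I] and scale θ = Var[I]/E[I]. A Monte Carlo sketch with made-up network parameters (density, guard radius, path-loss exponent):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo: interference at the origin from a Poisson field of
# interferers outside a guard radius, with path-loss exponent 4 and
# unit-mean Rayleigh (exponential power) fading. Parameters illustrative.
density, r_min, r_max, alpha = 1e-3, 50.0, 1000.0, 4.0
trials = 2000
area = np.pi * (r_max**2 - r_min**2)
I = np.empty(trials)
for t in range(trials):
    k = rng.poisson(density * area)
    # radii with density proportional to r on the annulus
    r = np.sqrt(rng.uniform(r_min**2, r_max**2, size=k))
    h = rng.exponential(1.0, size=k)          # small-scale fading
    I[t] = np.sum(h * r ** (-alpha))

# Second-order moment matching to Gamma(shape, scale):
# shape = E[I]^2 / Var[I], scale = Var[I] / E[I].
m, v = I.mean(), I.var()
shape, scale = m * m / v, v / m
```

By construction the matched Gamma reproduces the sample mean and variance exactly; its usefulness lies in the tractable success-probability expressions it yields.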


Journal ArticleDOI
TL;DR: A simple, non-parametric method with resampling to account for the different sequencing depths is introduced, and it is found that the method discovers more consistent patterns than competing methods.
Abstract: We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or 'sequencing depths'. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by 'outliers' in the data. We introduce a simple, non-parametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods.

431 citations
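One simple way to put samples with different sequencing depths on a common footing is binomial downsampling to the minimum depth. This is an illustrative stand-in for the paper's resampling scheme, not its actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy count matrix: genes x samples, with very different sequencing depths.
counts = rng.poisson(lam=[[40, 400], [10, 100], [25, 250]])
depths = counts.sum(axis=0)                  # per-sample total reads

# Downsample every sample to the minimum depth by binomial thinning:
# each read is kept independently with probability d_min / d_j.
d_min = depths.min()
keep_prob = d_min / depths
resampled = rng.binomial(counts, keep_prob[None, :])
```

After thinning, all samples have the same expected depth, so count comparisons across samples no longer confound expression with depth.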


Book
23 Jul 2013
TL;DR: This book presents methods for the statistical analysis of spatial and spatio-temporal point patterns, including estimation of second-order properties, parameter estimation using the K-function, goodness-of-fit assessment using nearest neighbor distributions, and both empirical and mechanistic models.
Abstract: Table of contents:
Introduction: Spatial point patterns; Sampling; Edge-effects; Complete spatial randomness; Objectives of statistical analysis; The Dirichlet tessellation; Monte Carlo tests; Software
Preliminary Testing: Tests of complete spatial randomness; Inter-event distances; Nearest neighbor distances; Point to nearest event distances; Quadrat counts; Scales of pattern; Recommendations
Methods for Sparsely Sampled Patterns: General remarks; Quadrat counts; Distance measurements; Tests of independence; Recommendations
Spatial Point Processes: Processes and summary descriptions; Second-order properties; Higher order moments and nearest neighbor distributions; The homogeneous Poisson process; Independence and random labeling; Estimation of second-order properties; Displaced amacrine cells in the retina of a rabbit; Estimation of nearest neighbor distributions; Concluding remarks
Nonparametric Methods: Estimating weighted integrals of the second-order intensity; Nonparametric estimation of a spatially varying intensity; Analyzing replicated spatial point patterns; Parametric or nonparametric methods?
Models: Contagious distributions; Poisson cluster processes; Inhomogeneous Poisson processes; Cox processes; Trans-Gaussian Cox processes; Simple inhibition processes; Markov point processes; Other constructions; Multivariate models
Model-Fitting Using Summary Descriptions: Parameter estimation using the K-function; Goodness-of-fit assessment using nearest neighbor distributions; Examples; Parameter estimation via goodness-of-fit testing
Model-Fitting Using Likelihood-Based Methods: Likelihood inference for inhomogeneous Poisson processes; Likelihood inference for Markov point processes; Likelihood inference for Cox processes; Additional reading
Point Process Methods in Spatial Epidemiology: Spatial clustering; Spatial variation in risk; Point source models; Stratification and matching; Disentangling heterogeneity and clustering
Spatio-Temporal Point Processes: Motivating examples; A classification of spatio-temporal point patterns and processes; Second-order properties; Conditioning on the past; Empirical and mechanistic models
Exploratory Analysis: Animation; Marginal and conditional summaries; Second-order properties
Empirical Models and Methods: Poisson processes; Cox processes; Log-Gaussian Cox processes; Inference; Gastro-intestinal illness in Hampshire, UK; Concluding remarks: point processes and geostatistics
Mechanistic Models and Methods: Conditional intensity and likelihood; Partial likelihood; The 2001 foot-and-mouth epidemic in Cumbria, UK; Nesting patterns of Arctic terns
References

349 citations


Journal ArticleDOI
TL;DR: The exact unbiased inverse of the Anscombe transformation is introduced and it is demonstrated that this exact inverse leads to state-of-the-art results without any notable increase in the computational complexity compared to the other inverses.
Abstract: Many digital imaging devices operate by successive photon-to-electron, electron-to-voltage, and voltage-to-digit conversions. These processes are subject to various signal-dependent errors, which are typically modeled as Poisson-Gaussian noise. The removal of such noise can be effected indirectly by applying a variance-stabilizing transformation (VST) to the noisy data, denoising the stabilized data with a Gaussian denoising algorithm, and finally applying an inverse VST to the denoised data. The generalized Anscombe transformation (GAT) is often used for variance stabilization, but its unbiased inverse transformation has not been rigorously studied in the past. We introduce the exact unbiased inverse of the GAT and show that it plays an integral part in ensuring accurate denoising results. We demonstrate that this exact inverse leads to state-of-the-art results without any notable increase in the computational complexity compared to the other inverses. We also show that this inverse is optimal in the sense that it can be interpreted as a maximum likelihood inverse. Moreover, we thoroughly analyze the behavior of the proposed inverse, which also enables us to derive a closed-form approximation for it. This paper generalizes our work on the exact unbiased inverse of the Anscombe transformation, which we have presented earlier for the removal of pure Poisson noise.

320 citations
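For pure Poisson noise, the Anscombe transform is f(x) = 2√(x + 3/8), and the exact unbiased inverse maps D = E[f(X) | λ] back to λ. The sketch below tabulates this expectation numerically and inverts it by interpolation; the paper derives the analogous inverse for the generalized (Poisson-Gaussian) transformation, together with a closed-form approximation:

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# Tabulate E[f(X) | lam] for X ~ Poisson(lam) on a grid, then invert by
# interpolation. This mimics the exact unbiased inverse for pure Poisson
# noise; the paper treats the generalized Poisson-Gaussian case.
lams = np.linspace(0.05, 30.0, 600)
ks = np.arange(0, 200)
log_fact = np.concatenate([[0.0], np.cumsum(np.log(ks[1:]))])  # log k!
E_f = np.array([
    np.sum(anscombe(ks) * np.exp(ks * np.log(lam) - lam - log_fact))
    for lam in lams
])

def exact_unbiased_inverse(D):
    """Map a denoised stabilized value D back to the Poisson mean."""
    return np.interp(D, E_f, lams)

# Round trip: inverting E[f(X) | lam0] should recover lam0.
lam0 = 5.0
D0 = np.sum(anscombe(ks) * np.exp(ks * np.log(lam0) - lam0 - log_fact))
lam_hat = exact_unbiased_inverse(D0)
```

The naive algebraic inverse (D/2)² − 3/8 is biased at low counts, which is exactly the regime where the exact unbiased inverse matters.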


Journal ArticleDOI
TL;DR: A novel metric, the deployment gain, is introduced and it is demonstrated how it can be used to estimate the coverage performance and average rate achieved by a data set.
Abstract: The spatial structure of base stations (BSs) in cellular networks plays a key role in evaluating the downlink performance. In this paper, different spatial stochastic models (the Poisson point process (PPP), the Poisson hard-core process (PHCP), the Strauss process (SP), and the perturbed triangular lattice) are used to model the structure by fitting them to the locations of BSs in real cellular networks obtained from a public database. We provide two general approaches for fitting. One is fitting by the method of maximum pseudolikelihood. As for the fitted models, it is not sufficient to distinguish them conclusively by some classical statistics. We propose the coverage probability as the criterion for the goodness-of-fit. In terms of coverage, the SP provides a better fit than the PPP and the PHCP. The other approach is fitting by the method of minimum contrast that minimizes the average squared error of the coverage probability. This way, fitted models are obtained whose coverage performance matches that of the given data set very accurately. Furthermore, we introduce a novel metric, the deployment gain, and we demonstrate how it can be used to estimate the coverage performance and average rate achieved by a data set.

308 citations


01 Jan 2013
TL;DR: The Chen-Stein method of Poisson approximation is a powerful tool for computing an error bound when approximating probabilities using the Poisson distribution; in many cases, this bound may be given in terms of first and second moments alone.
Abstract: The Chen-Stein method of Poisson approximation is a powerful tool for computing an error bound when approximating probabilities using the Poisson distribution. In many cases, this bound may be given in terms of first and second moments alone. We present a background of the method and state some fundamental Poisson approximation theorems. The body of this paper is an illustration, through varied examples, of the wide applicability and utility of the Chen-Stein method. These examples include birthday coincidences, head runs in coin tosses, random graphs, maxima of normal variates and random permutations and mappings. We conclude with an application to molecular biology. The variety of examples presented here does not exhaust the range of possible applications of the Chen-Stein method.

277 citations
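The birthday-coincidence example mentioned in the abstract is easy to reproduce: the number of coinciding pairs among n people is approximately Poisson with mean λ = n(n−1)/(2·365), so the probability of no shared birthday is roughly exp(−λ); the Chen-Stein method bounds the error of this approximation.

```python
import math

def p_no_shared_birthday_exact(n, days=365):
    """Exact probability that n people all have distinct birthdays."""
    p = 1.0
    for i in range(n):
        p *= (days - i) / days
    return p

def p_no_shared_birthday_poisson(n, days=365):
    """Poisson approximation: the number of coinciding pairs is roughly
    Poisson with mean lam = C(n, 2) / days, so P(no pair) ~ exp(-lam)."""
    lam = n * (n - 1) / 2 / days
    return math.exp(-lam)

exact = p_no_shared_birthday_exact(23)    # ~0.493
approx = p_no_shared_birthday_poisson(23)  # ~0.500
```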


Proceedings ArticleDOI
14 Apr 2013
TL;DR: In this article, the authors present a Poisson-convergence result for a broad range of stationary (including lattice) networks subject to log-normal shadowing of increasing variance.
Abstract: An almost ubiquitous assumption made in the stochastic-analytic approach to study of the quality of user-service in cellular networks is Poisson distribution of base stations, often completed by some specific assumption regarding the distribution of the fading (e.g. Rayleigh). The former (Poisson) assumption is usually (vaguely) justified in the context of cellular networks, by various irregularities in the real placement of base stations, which ideally should form a lattice (e.g. hexagonal) pattern. In the first part of this paper we provide a different and rigorous argument justifying the Poisson assumption under sufficiently strong lognormal shadowing observed in the network, in the evaluation of a natural class of the typical-user service-characteristics (including path-loss, interference, signal-to-interference ratio, spectral efficiency). Namely, we present a Poisson-convergence result for a broad range of stationary (including lattice) networks subject to log-normal shadowing of increasing variance. We show also for the Poisson model that the distribution of all these typical-user service characteristics does not depend on the particular form of the additional fading distribution. Our approach involves a mapping of 2D network model to 1D image of it “perceived” by the typical user. For this image we prove our Poisson convergence result and the invariance of the Poisson limit with respect to the distribution of the additional shadowing or fading. Moreover, in the second part of the paper we present some new results for Poisson model allowing one to calculate the distribution function of the SINR in its whole domain. We use them to study and optimize the mean energy efficiency in cellular networks.

253 citations


Journal ArticleDOI
TL;DR: It is found that the estimated dispersion in existing methods does not adequately capture the heterogeneity of biological variance among samples, so a new empirical Bayes shrinkage estimate of the dispersion parameters is presented and improved DE detection is demonstrated.
Abstract: Recent developments in RNA-sequencing (RNA-seq) technology have led to a rapid increase in gene expression data in the form of counts. RNA-seq can be used for a variety of applications, however, identifying differential expression (DE) remains a key task in functional genomics. There have been a number of statistical methods for DE detection for RNA-seq data. One common feature of several leading methods is the use of the negative binomial (Gamma-Poisson mixture) model. That is, the unobserved gene expression is modeled by a gamma random variable and, given the expression, the sequencing read counts are modeled as Poisson. The distinct feature in various methods is how the variance, or dispersion, in the Gamma distribution is modeled and estimated. We evaluate several large public RNA-seq datasets and find that the estimated dispersion in existing methods does not adequately capture the heterogeneity of biological variance among samples. We present a new empirical Bayes shrinkage estimate of the dispersion parameters and demonstrate improved DE detection.

Posted Content
TL;DR: This paper derived fixed effects estimators of parameters and average partial effects in (possibly dynamic) nonlinear panel data models with individual and time effects, using analytical and jackknife bias corrections to deal with the incidental parameter problem.
Abstract: We derive fixed effects estimators of parameters and average partial effects in (possibly dynamic) nonlinear panel data models with individual and time effects. They cover logit, probit, ordered probit, Poisson and Tobit models that are important for many empirical applications in micro and macroeconomics. Our estimators use analytical and jackknife bias corrections to deal with the incidental parameter problem, and are asymptotically unbiased under asymptotic sequences where $N/T$ converges to a constant. We develop inference methods and show that they perform well in numerical examples.

Journal ArticleDOI
TL;DR: Results suggest that greater mixing of residential and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably because such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates.

Journal ArticleDOI
TL;DR: In this paper, a new molecular modulation scheme for nanonetworks is proposed, and the error probability of the proposed scheme as well as that of two previously known schemes, the concentration and molecular shift keying modulations, are derived for the Poisson model by taking into account the error propagation effect of previously decoded symbols.
Abstract: In this letter, a new molecular modulation scheme for nanonetworks is proposed. To evaluate the scheme, a system model based on the Poisson distribution is introduced. The error probability of the proposed scheme as well as that of two previously known schemes, the concentration and molecular shift keying modulations, are derived for the Poisson model by taking into account the error propagation effect of previously decoded symbols. The proposed scheme is shown to outperform the previously introduced schemes. This is due to the fact that the decoding of the current symbol in the proposed scheme does not encounter propagation of error, as the decoding of the current symbol does not depend on the previously transmitted and decoded symbols. Finally, fundamental limits on the probability of error of a practical set of encoders and decoders are derived using information theoretical tools.

Journal ArticleDOI
TL;DR: It turns out that the local delay behaves rather differently in the two cases of high mobility and no mobility, and the low- and high-rate asymptotic behavior of the minimum achievable delay in each case is provided.
Abstract: Communication between two neighboring nodes is a very basic operation in wireless networks. Yet very little research has focused on the local delay in networks with randomly placed nodes, defined as the mean time it takes a node to connect to its nearest neighbor. We study this problem for Poisson networks, first considering interference only, then noise only, and finally and briefly, interference plus noise. In the noiseless case, we analyze four different types of nearest-neighbor communication and compare the extreme cases of high mobility, where a new Poisson process is drawn in each time slot, and no mobility, where only a single realization exists and nodes stay put forever. It turns out that the local delay behaves rather differently in the two cases. We also provide the low- and high-rate asymptotic behavior of the minimum achievable delay in each case. In the cases with noise, power control is essential to keep the delay finite, and randomized power control can drastically reduce the required (mean) power for finite local delay.

Journal ArticleDOI
TL;DR: Several likelihood-based inferential methods for the Tweedie compound Poisson mixed model that enable estimation of the variance function from the data are presented.
Abstract: The Tweedie compound Poisson distribution is a subclass of the exponential dispersion family with a power variance function, in which the value of the power index lies in the interval (1,2). It is well known that the Tweedie compound Poisson density function is not analytically tractable, and numerical procedures that allow the density to be accurately and fast evaluated did not appear until fairly recently. Unsurprisingly, there has been little statistical literature devoted to full maximum likelihood inference for Tweedie compound Poisson mixed models. To date, the focus has been on estimation methods in the quasi-likelihood framework. Further, Tweedie compound Poisson mixed models involve an unknown variance function, which has a significant impact on hypothesis tests and predictive uncertainty measures. The estimation of the unknown variance function is thus of independent interest in many applications. However, quasi-likelihood-based methods are not well suited to this task. This paper presents several likelihood-based inferential methods for the Tweedie compound Poisson mixed model that enable estimation of the variance function from the data. These algorithms include the likelihood approximation method, in which both the integral over the random effects and the compound Poisson density function are evaluated numerically; and the latent variable approach, in which maximum likelihood estimation is carried out via the Monte Carlo EM algorithm, without the need for approximating the density function. In addition, we derive the corresponding Markov Chain Monte Carlo algorithm for a Bayesian formulation of the mixed model. We demonstrate the use of the various methods through a numerical example, and conduct an array of simulation studies to evaluate the statistical properties of the proposed estimators.
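A Tweedie compound Poisson variable with power index 1 < p < 2 can be simulated directly as a Poisson number of gamma summands. The sketch below uses the standard parameter mapping (λ = μ^(2−p)/(φ(2−p)), gamma shape (2−p)/(p−1), scale φ(p−1)μ^(p−1)) and checks the power mean-variance relation Var[Y] = φμ^p by Monte Carlo; all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)

def rtweedie(size, mu, phi, p, rng):
    """Simulate Tweedie compound Poisson variables (1 < p < 2) as a
    Poisson(lam) sum of Gamma(shape, scale) terms, parameterized so that
    E[Y] = mu and Var[Y] = phi * mu**p."""
    lam = mu ** (2 - p) / (phi * (2 - p))
    shape = (2 - p) / (p - 1)
    scale = phi * (p - 1) * mu ** (p - 1)
    n = rng.poisson(lam, size=size)            # number of gamma summands
    # A sum of n iid Gamma(shape, scale) is Gamma(n * shape, scale);
    # n = 0 yields an exact zero, the Tweedie point mass at zero.
    return np.where(n > 0, rng.gamma(np.maximum(n, 1) * shape, scale), 0.0)

mu, phi, p = 3.0, 1.5, 1.6
y = rtweedie(200_000, mu, phi, p, rng)
```

The exact zeros produced when the Poisson count is zero are what make the Tweedie family attractive for semicontinuous data.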

Posted Content
TL;DR: A variational inference algorithm for approximate posterior inference that scales to massive data sets by iterating over the observed entries and adjusting an approximate posterior over the user/item representations.
Abstract: We develop a Bayesian Poisson matrix factorization model for forming recommendations from sparse user behavior data. These data are large user/item matrices where each user has provided feedback on only a small subset of items, either explicitly (e.g., through star ratings) or implicitly (e.g., through views or purchases). In contrast to traditional matrix factorization approaches, Poisson factorization implicitly models each user's limited attention to consume items. Moreover, because of the mathematical form of the Poisson likelihood, the model needs only to explicitly consider the observed entries in the matrix, leading to both scalable computation and good predictive performance. We develop a variational inference algorithm for approximate posterior inference that scales up to massive data sets. This is an efficient algorithm that iterates over the observed entries and adjusts an approximate posterior over the user/item representations. We apply our method to large real-world user data containing users rating movies, users listening to songs, and users reading scientific papers. In all these settings, Bayesian Poisson factorization outperforms state-of-the-art matrix factorization methods.
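A simpler, non-Bayesian cousin of this model is maximum-likelihood Poisson factorization Y ≈ Poisson(UV), which can be fit with the classic multiplicative (KL-NMF) updates; like the Bayesian version, the likelihood depends on the counts only through the observed matrix. This sketch on synthetic data is not the paper's variational algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy user-item count matrix generated from a rank-2 Poisson model.
U0 = rng.gamma(2.0, 0.5, size=(30, 2))
V0 = rng.gamma(2.0, 0.5, size=(2, 40))
Y = rng.poisson(U0 @ V0)

# Maximum-likelihood Poisson factorization Y ~ Poisson(U @ V) via
# multiplicative updates (minimizing the KL/Poisson objective).
k = 2
U = rng.random((30, k)) + 0.1
V = rng.random((k, 40)) + 0.1

def nll(Y, U, V):
    M = U @ V
    return float(np.sum(M - Y * np.log(M)))   # Poisson NLL up to a constant

prev = nll(Y, U, V)
for _ in range(200):
    M = U @ V
    U *= (Y / M) @ V.T / V.sum(axis=1)
    M = U @ V
    V *= U.T @ (Y / M) / U.sum(axis=0)[:, None]
curr = nll(Y, U, V)
```

The multiplicative updates are guaranteed not to increase the Poisson objective, which the test below checks.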

Posted Content
TL;DR: In this paper, the authors consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families and derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial and exponential distributions.
Abstract: Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions.

Journal ArticleDOI
TL;DR: In this paper, the authors develop the large deviation theory for small Poisson noise perturbations of a general class of deterministic infinite dimensional models, which is based on a variational representation for nonnegative functionals of general Poisson random measures.

Journal ArticleDOI
TL;DR: In this article, a non-stationary peaks-over-threshold (POT) model with climatic covariates for these heavy rainfall events is developed, and a regional sample of events exceeding the threshold of 100 mm/d is built using daily precipitation data recorded at 44 stations over the period 1958-2008.
Abstract: Heavy rainfall events often occur in southern French Mediterranean regions during the autumn, leading to catastrophic flood events. A non-stationary peaks-over-threshold (POT) model with climatic covariates for these heavy rainfall events is developed herein. A regional sample of events exceeding the threshold of 100 mm/d is built using daily precipitation data recorded at 44 stations over the period 1958–2008. The POT model combines a Poisson distribution for the occurrence and a generalized Pareto distribution for the magnitude of the heavy rainfall events. The selected covariates are the seasonal occurrence of southern circulation patterns for the Poisson distribution parameter, and monthly air temperature for the generalized Pareto distribution scale parameter. According to the deviance test, the non-stationary model provides a better fit to the data than a classical stationary model. Such a model incorporating climatic covariates instead of time allows one to re-evaluate the risk of extreme ...

Journal ArticleDOI
TL;DR: This paper investigates the first passage times to flat boundaries for hyper-exponential jump (diffusion) processes and presents explicit expressions of the dividend formulae for barrier strategy and threshold strategy.

Posted Content
TL;DR: PReMiuM is a recently developed R package for Bayesian clustering using a Dirichlet process mixture model, an alternative to regression models, non-parametrically linking a response vector to covariate data through cluster membership.
Abstract: PReMiuM is a recently developed R package for Bayesian clustering using a Dirichlet process mixture model. This model is an alternative to regression models, non-parametrically linking a response vector to covariate data through cluster membership. The package allows Bernoulli, Binomial, Poisson, Normal and categorical response, as well as Normal and discrete covariates. Additionally, predictions may be made for the response, and missing values for the covariates are handled. Several samplers and label switching moves are implemented along with diagnostic tools to assess convergence. A number of R functions for post-processing of the output are also provided. In addition to fitting mixtures, it may additionally be of interest to determine which covariates actively drive the mixture components. This is implemented in the package as variable selection.

Journal ArticleDOI
TL;DR: A novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model, which assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed and develops a neighborhood selection algorithm to fit the model locally.
Abstract: Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research.

Journal ArticleDOI
TL;DR: In this article, the authors considered a compound Poisson risk model for the surplus process where the process and hence ruin can only be observed at random observation times, and derived explicit expressions for the discounted penalty function at ruin.
Abstract: In the framework of collective risk theory, we consider a compound Poisson risk model for the surplus process where the process (and hence ruin) can only be observed at random observation times. For Erlang(n) distributed inter-observation times, explicit expressions for the discounted penalty function at ruin are derived. The resulting model contains both the usual continuous-time and the discrete-time risk model as limiting cases, and can be used as an effective approximation scheme for the latter. Numerical examples are given that illustrate the effect of random observation times on various ruin-related quantities.

Journal ArticleDOI
TL;DR: In this paper, a U-statistic of a Poisson point process is defined as the sum ∑f(x1,…,xk) over all k-tuples of distinct points of the point process, and the Wiener-Ito chaos expansion of such a functional is used to derive a formula for the variance.
Abstract: A U-statistic of a Poisson point process is defined as the sum ∑f(x1,…,xk) over all (possibly infinitely many) k-tuples of distinct points of the point process. Using the Malliavin calculus, the Wiener–Ito chaos expansion of such a functional is computed and used to derive a formula for the variance. Central limit theorems for U-statistics of Poisson point processes are shown, with explicit bounds for the Wasserstein distance to a Gaussian random variable. As applications, the intersection process of Poisson hyperplanes and the length of a random geometric graph are investigated.

Journal ArticleDOI
TL;DR: In this article, a model that extends a Tweedie generalised linear model is proposed to estimate the weight of a given fish species in a fish trawl, where both the number of gamma variates and their average size are modelled separately.
Abstract: The statistical analysis of non-negative continuous data is a common task in quantitative ecology. An example, and our motivation, is the weight of a given fish species in a fish trawl. The analysis is complicated by the occurrence of exactly zero observations, which makes many statistical methods for continuous data inappropriate. In this paper we propose a model that extends a Tweedie generalised linear model. The proposed model exploits the fact that a Tweedie distribution is equivalent to the distribution obtained by summing a Poisson number of gamma random variables. In the proposed model, both the number of gamma variates and their average size are modelled separately. The model has a composite link and a flexible mean-variance relationship that can vary with covariates. We illustrate the model, and compare it to other models, using data from a fish trawl survey in south-east Australia.
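The Poisson-sum-of-gammas representation the abstract relies on can be checked directly by simulation. The sketch below uses the standard Tweedie(mu, phi, p) parameterization with 1 < p < 2; the particular parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def tweedie_sample(mu, phi, p, size):
    """Tweedie(mu, phi, p), 1 < p < 2, simulated through its compound
    Poisson-gamma representation: Y = sum of N iid Gamma variates, N ~ Poisson."""
    lam = mu ** (2 - p) / (phi * (2 - p))      # Poisson rate
    alpha = (2 - p) / (p - 1)                  # gamma shape per summand
    theta = phi * (p - 1) * mu ** (p - 1)      # gamma scale
    n = rng.poisson(lam, size)
    y = np.zeros(size)
    pos = n > 0
    y[pos] = rng.gamma(alpha * n[pos], theta)  # a sum of n gammas is a gamma
    return y

mu, phi, p = 2.0, 1.0, 1.5
y = tweedie_sample(mu, phi, p, 200_000)
print(y.mean(), y.var(), (y == 0).mean())
# theory: mean = mu, variance = phi * mu**p, P(Y = 0) = exp(-lam)
```

The point mass at zero, visible as a nonzero fraction of exact zeros, is precisely what makes the Tweedie family attractive for trawl-weight data.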

Journal ArticleDOI
TL;DR: It was found that the rankings produced by the fixed-over-time random effects models are highly consistent with one another, and that the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking.

Journal ArticleDOI
TL;DR: This article proposes Poisson quasi-likelihood estimators, which are consistent in the presence of excess zeros without requiring the full distribution to be specified; their advantages are illustrated in a series of Monte Carlo simulations and in an application to the demand for health services.
Abstract: Applications of zero-inflated count data models have proliferated in health economics. However, zero-inflated Poisson or zero-inflated negative binomial maximum likelihood estimators are not robust to misspecification. This article proposes Poisson quasi-likelihood estimators as an alternative. These estimators are consistent in the presence of excess zeros without having to specify the full distribution. The advantages of the Poisson quasi-likelihood approach are illustrated in a series of Monte Carlo simulations and in an application to the demand for health services.
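The consistency claim rests on a simple fact: the Poisson quasi-likelihood estimator only requires the conditional mean exp(x'b) to be correct, not the Poisson distribution itself. A small simulation, not from the article, makes the point: when zeros are inflated independently of the covariate, the slope is still recovered (the intercept absorbs the inflation factor).

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_fit(X, y, n_iter=25):
    """Poisson regression by Newton-Raphson. As a quasi-likelihood estimator,
    only the conditional mean exp(X @ beta) needs to be correctly specified."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        H = X.T @ (X * mu[:, None])
        beta = beta + np.linalg.solve(H, grad)
    return beta

n = 50_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))
y[rng.random(n) < 0.3] = 0          # 30% excess zeros, independent of x

beta_hat = poisson_fit(X, y)
print(np.round(beta_hat, 3))
# the slope stays near 0.8; the intercept shifts by log(1 - 0.3)
```

A zero-inflated Poisson MLE with a misspecified inflation model offers no such guarantee, which is the article's argument for the quasi-likelihood approach.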

Journal ArticleDOI
TL;DR: This paper proposes an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators and derives a link that relates the proximal map of an lp norm with the proximal map of a Schatten matrix norm of order p.
Abstract: Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an lp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
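A key ingredient of ADMM-type splittings for Poisson noise is that the proximal map of the Poisson data term has a closed form: for D(x) = sum(x - y*log x), the prox is the positive root of a pixelwise quadratic. The sketch below is a generic building block of such methods, not the paper's full Hessian-Schatten algorithm.

```python
import numpy as np

def prox_poisson(v, y, tau):
    """Closed-form proximal map of the Poisson data term
    D(x) = sum(x - y*log(x)) (negative log-likelihood up to constants):
    the positive root of x^2 + (tau - v)*x - tau*y = 0, elementwise."""
    return 0.5 * ((v - tau) + np.sqrt((v - tau) ** 2 + 4 * tau * y))

v = np.array([0.2, 1.0, 3.0])   # current estimate (e.g. from an ADMM split)
y = np.array([2.0, 0.0, 5.0])   # Poisson counts
tau = 0.5
x = prox_poisson(v, y, tau)
# stationarity of the prox objective: tau*(1 - y/x) + (x - v) = 0
print(np.max(np.abs(tau * (1 - y / x) + (x - v))))
```

Inside an augmented-Lagrangian iteration, this prox handles the data fidelity while the regularizer (here, a Schatten norm of the Hessian) is handled by its own proximal map.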

Book
20 Nov 2013
TL;DR: This book develops the relationship between mutual information and MMSE, covering properties of the MMSE in Gaussian noise and applications of the I-MMSE relation in discrete- and continuous-time Gaussian channels.
Abstract: 1. Introduction 2: Basic Information and Estimation Measures 3: Properties of the MMSE in Gaussian Noise 4: Mutual Information and MMSE: Basic Relationship 5: Mutual Information and MMSE in Discrete- and Continuous-time Gaussian Channels 6: Entropy, Relative Entropy, Fisher Information, and Mismatched Estimation 7: Applications of I-MMSE 8: Information and Estimation Measures in Poisson Models and Channels 9: Beyond Gaussian and Poisson Models 10: Outlook. Acknowledgements. Appendices. References.
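The central I-MMSE identity of chapters 4-5 — dI/dsnr = mmse(snr)/2 in nats, for Y = sqrt(snr)*X + N with Gaussian noise — can be verified numerically. The check below uses equiprobable binary input, a standard example rather than one taken from the book.

```python
import numpy as np

def bpsk_mi_mmse(snr):
    """Y = sqrt(snr) * X + N with X = +/-1 equiprobable, N ~ N(0, 1).
    Returns I(X; Y) in nats and mmse(snr) = E[(X - E[X|Y])^2]."""
    s = np.sqrt(snr)
    y = np.linspace(-12.0, 12.0, 48001)
    dy = y[1] - y[0]
    phi = lambda t: np.exp(-t * t / 2.0) / np.sqrt(2.0 * np.pi)
    p = 0.5 * (phi(y - s) + phi(y + s))            # density of Y
    h_y = -np.sum(p * np.log(p)) * dy              # differential entropy of Y
    mi = h_y - 0.5 * np.log(2.0 * np.pi * np.e)    # I = h(Y) - h(N)
    mmse = 1.0 - np.sum(p * np.tanh(s * y) ** 2) * dy  # E[X|Y=y] = tanh(s*y)
    return mi, mmse

snr, eps = 1.0, 1e-3
di = (bpsk_mi_mmse(snr + eps)[0] - bpsk_mi_mmse(snr - eps)[0]) / (2 * eps)
mmse = bpsk_mi_mmse(snr)[1]
print(di, mmse / 2)   # I-MMSE: the two numbers agree
```

The book's Poisson-channel chapters establish an analogous identity with a different estimation-theoretic loss in place of the squared error.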