
Showing papers on "Poisson distribution published in 2011"


Journal ArticleDOI
TL;DR: On the bicentenary of the publication of Poisson's Traité de Mécanique, the continuing relevance of Poisson's ratio in the understanding of the mechanical characteristics of modern materials is reviewed.
Abstract: In comparing a material's resistance to distort under mechanical load rather than to alter in volume, Poisson's ratio offers the fundamental metric by which to compare the performance of any material when strained elastically. The numerical limits are set by ½ and -1, between which all stable isotropic materials are found. With new experiments, computational methods and routes to materials synthesis, we assess what Poisson's ratio means in the contemporary understanding of the mechanical characteristics of modern materials. Central to these recent advances, we emphasize the significance of relationships outside the elastic limit between Poisson's ratio and densification, connectivity, ductility and the toughness of solids; and their association with the dynamic properties of the liquids from which they were condensed and into which they melt.

1,625 citations
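The bounds quoted in the abstract (½ and -1) can be made concrete with a line of standard isotropic elasticity: Poisson's ratio follows from the bulk modulus K and shear modulus G, and requiring both moduli to be positive confines ν to the open interval (-1, ½). A minimal sketch, with illustrative moduli that are not taken from the paper:

```python
# Standard isotropic elasticity (not code from the paper): Poisson's ratio
# from bulk modulus K and shear modulus G. Positivity of K and G confines
# nu to (-1, 1/2), the limits quoted in the abstract.

def poisson_ratio(K, G):
    """Poisson's ratio of an isotropic material from its bulk and shear moduli."""
    return (3 * K - 2 * G) / (2 * (3 * K + G))

# Shear-dominated solids (G >> K) approach the auxetic limit nu -> -1;
# nearly incompressible ones (K >> G) approach nu -> 1/2.
for K, G in [(1e6, 1e12), (76e9, 26e9), (1e12, 1e6)]:  # illustrative moduli in Pa
    print(f"K = {K:.0e} Pa, G = {G:.0e} Pa  ->  nu = {poisson_ratio(K, G):+.4f}")
```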


Journal ArticleDOI
TL;DR: In this article, the authors extend the simulation results in Santos Silva and Tenreyro (2006, The log of gravity, The Review of Economics and Statistics, 88, 641-658) by considering a novel data-generating process.

764 citations
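The point at issue, that OLS on log-transformed data is biased under heteroskedastic multiplicative errors while the Poisson pseudo-maximum-likelihood (PPML) estimator is not, is easy to reproduce. A minimal sketch with a hypothetical data-generating process (not the one used in the article), using statsmodels:

```python
# Hedged sketch: under a multiplicative DGP whose error dispersion depends on
# the regressor, OLS on log(y) is inconsistent for the elasticity, while the
# Poisson pseudo-maximum-likelihood (PPML) estimator recovers it. The DGP is
# illustrative, not the one in the article.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, beta = 20_000, 1.0
x = rng.normal(size=n)
mu = np.exp(0.5 + beta * x)

s2 = 0.5 * np.exp(x)                                  # error variance depends on x
eta = rng.lognormal(mean=-s2 / 2, sigma=np.sqrt(s2))  # E[eta | x] = 1
y = mu * eta

X = sm.add_constant(x)
ols = sm.OLS(np.log(y), X).fit()                         # biased for beta
ppml = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # consistent for beta
print(f"log-OLS slope: {ols.params[1]:.3f}   PPML slope: {ppml.params[1]:.3f}")
```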


Book
04 Aug 2011
TL;DR: This book develops flexible parametric survival analysis in Stata, moving beyond the Cox model to Poisson, Royston-Parmar, and relative survival models, with extensive coverage of prognostic modeling and time-dependent effects.
Abstract (table of contents):
Introduction: Goals; A brief review of the Cox proportional hazards model; Beyond the Cox model; Why parametric models?; Why not standard parametric models?; A brief introduction to stpm; Basic relationships in survival analysis; Comparing models; The delta method; Ado-file resources; How our book is organized.
Using stset and stsplit: What is the stset command?; Some key concepts; Syntax of the stset command; Variables created by the stset command; Examples of using stset; The stsplit command; Conclusion.
Graphical introduction to the principal datasets: Introduction; Rotterdam breast cancer data; England and Wales breast cancer data; Orchiectomy data; Conclusion.
Poisson models: Introduction; Modeling rates with the Poisson distribution; Splitting the time scale; Collapsing the data to speed up computation; Splitting at unique failure times; Comparing a different number of intervals; Fine splitting of the time scale; Splines: Motivation and definition; FPs: Motivation and definition; Discussion.
Royston-Parmar models: Motivation and introduction; Proportional hazards models; Selecting a spline function; PO models; Probit models; Royston-Parmar (RP) models; Concluding remarks.
Prognostic models: Introduction; Developing and reporting a prognostic model; What does the baseline hazard function mean?; Model selection; Quantitative outputs from the model; Goodness of fit; Out-of-sample prediction: Concept and applications; Visualization of survival times; Discussion.
Time-dependent effects: Introduction; Definitions; What do we mean by a TD effect?; Proportional on which scale?; Poisson models with TD effects; RP models with TD effects; TD effects for continuous variables; Attained age as the time scale; Multiple time scales; Prognostic models with TD effects; Discussion.
Relative survival: Introduction; What is relative survival?; Excess mortality and relative survival; Motivating example; Life-table estimation of relative survival; Poisson models for relative survival; RP models for relative survival; Some comments on model selection; Age as a continuous variable; Concluding remarks.
Further topics: Introduction; Number needed to treat; Average and adjusted survival curves; Modeling distributions with RP models; Multiple events; Bayesian RP models; Competing risks; Period analysis; Crude probability of death from relative survival models; Final remarks.
References; Author index; Subject index.

392 citations
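The book's Poisson chapter rests on one device: split each subject's follow-up time into bands, then fit a Poisson model to the event indicators with log person-time as an offset. A minimal Python sketch of that idea on simulated data (the book itself works in Stata with stset, stsplit, and related commands; all names and values below are illustrative):

```python
# Piecewise-exponential (Poisson) survival sketch: split follow-up into time
# bands, then fit a Poisson GLM with a log person-time offset. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2_000
trt = rng.integers(0, 2, size=n)                    # hypothetical binary covariate
t = rng.exponential(scale=np.where(trt, 8.0, 5.0))  # true hazard ratio = 5/8
cens = rng.uniform(0, 10, size=n)
time, event = np.minimum(t, cens), (t <= cens).astype(int)

# Split the time scale at fixed cut points (the role of stsplit).
cuts = np.array([0, 2, 4, 6, 8, 10])
rows = []
for ti, ei, gi in zip(time, event, trt):
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        if ti <= lo:
            break
        exposure = min(ti, hi) - lo                 # person-time in this band
        rows.append((gi, lo, exposure, int(ei and ti <= hi)))
df = pd.DataFrame(rows, columns=["trt", "band", "pt", "d"])

X = pd.get_dummies(df[["trt", "band"]], columns=["band"], drop_first=True, dtype=float)
X = sm.add_constant(X)
fit = sm.GLM(df["d"], X, family=sm.families.Poisson(),
             offset=np.log(df["pt"])).fit()
print("log hazard ratio for trt:", fit.params["trt"])  # ~ log(5/8)
```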


Book
02 Mar 2011
TL;DR: This graduate text builds measure-theoretic probability from measure and integration through probability spaces, convergence, conditioning, and martingales, and continues to Poisson random measures and Lévy processes.
Abstract: Preface; Measure and Integration; Probability Spaces; Convergence; Conditioning; Martingales and Stochastics; Poisson Random Measures; Lévy Processes; Index; Bibliography.

382 citations


Journal ArticleDOI
01 Jul 2011-Ecology
TL;DR: A parameterization of the negative binomial distribution is proposed, where two overdispersion parameters are introduced to allow for various quadratic mean-variance relationships, including the ones assumed in the most commonly used approaches.
Abstract: A Poisson process is a commonly used starting point for modeling stochastic variation of ecological count data around a theoretical expectation. However, data typically show more variation than implied by the Poisson distribution. Such overdispersion is often accounted for by using models with different assumptions about how the variance changes with the expectation. The choice of these assumptions naturally has consequences for statistical inference. We propose a parameterization of the negative binomial distribution, where two overdispersion parameters are introduced to allow for various quadratic mean-variance relationships, including the ones assumed in the most commonly used approaches. Using bird migration as an example, we present hypothetical scenarios on how overdispersion can arise due to sampling, flocking behavior or aggregation, environmental variability, or combinations of these factors. For all considered scenarios, mean-variance relationships can be appropriately described by the negative binomial distribution with two overdispersion parameters. To illustrate, we apply the model to empirical migration data with a high level of overdispersion, gaining clearly different model fits with different assumptions about mean-variance relationships. The proposed framework can be a useful approximation for modeling marginal distributions of independent count data in likelihood-based analyses.

366 citations
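Reading the abstract's quadratic mean-variance form as Var(x) = μ + αμ + βμ² (the symbols α and β are labels chosen here, not necessarily the paper's), a Gamma-Poisson mixture with Gamma scale α + βμ reproduces that relationship exactly. A quick simulation check:

```python
# Sketch of a negative binomial with two overdispersion parameters, following
# the quadratic mean-variance form Var(x) = mu + alpha*mu + beta*mu^2 described
# in the abstract (alpha/beta are my labels). A Gamma-Poisson mixture with
# Gamma scale (alpha + beta*mu) reproduces exactly this relationship.
import numpy as np

rng = np.random.default_rng(2)

def rnb2(mu, alpha, beta, size):
    theta = alpha + beta * mu                     # Gamma scale
    lam = rng.gamma(shape=mu / theta, scale=theta, size=size)
    return rng.poisson(lam)

mu, alpha, beta = 20.0, 1.5, 0.05
x = rnb2(mu, alpha, beta, size=200_000)
print("empirical mean/var:", x.mean(), x.var())
print("theory    mean/var:", mu, mu + alpha * mu + beta * mu**2)
```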


Journal ArticleDOI
TL;DR: This work introduces optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse.
Abstract: The removal of Poisson noise is often performed through the following three-step procedure. First, the noise variance is stabilized by applying the Anscombe root transformation to the data, producing a signal in which the noise can be treated as additive Gaussian with unitary variance. Second, the noise is removed using a conventional denoising algorithm for additive white Gaussian noise. Third, an inverse transformation is applied to the denoised signal, obtaining the estimate of the signal of interest. The choice of the proper inverse transformation is crucial in order to minimize the bias error which arises when the nonlinear forward transformation is applied. We introduce optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse. We then present an experimental analysis using a few state-of-the-art denoising algorithms and show that the estimation can be consistently improved by applying the exact unbiased inverse, particularly at the low-count regime. This results in a very efficient filtering solution that is competitive with some of the best existing methods for Poisson image denoising.

341 citations
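The three-step procedure in the abstract is easy to sketch end to end. Below, a plain Gaussian smoother stands in for a state-of-the-art AWGN denoiser, and the simple asymptotic algebraic inverse stands in for the paper's exact unbiased inverse (the nonanalytical piece the paper contributes); signal and parameters are illustrative:

```python
# The three-step Poisson denoising pipeline from the abstract, sketched with a
# Gaussian smoother standing in for a state-of-the-art AWGN denoiser. The
# inverse used here is the plain asymptotic algebraic one; the paper's point
# is that an *exact unbiased* inverse does better at low counts.
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    """Variance-stabilizing forward transform: Poisson -> ~unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_asymptotic(d):
    """Asymptotically unbiased algebraic inverse (biased at low counts)."""
    return (d / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(3)
truth = 3.0 * (1 + np.sin(np.linspace(0, 6 * np.pi, 256)))  # low-count signal
noisy = rng.poisson(truth).astype(float)

d = anscombe(noisy)                    # 1) stabilize the variance
d_hat = gaussian_filter(d, sigma=2.0)  # 2) denoise as additive Gaussian noise
est = inverse_asymptotic(d_hat)        # 3) invert the transformation
print("RMSE noisy:", np.sqrt(np.mean((noisy - truth) ** 2)))
print("RMSE est  :", np.sqrt(np.mean((est - truth) ** 2)))
```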


01 Jan 2011
TL;DR: In this paper, a new proof of the L1 log-Sobolev inequality on the path space of Poisson point processes is presented, which avoids the use of the martingale representation on Poisson spaces.
Abstract: In this note we provide a new proof of the L1 log-Sobolev inequality on the path space of Poisson point processes. Our proof is elementary in the sense that it avoids the use of the martingale representation on Poisson spaces. Moreover, the weak Poincaré inequality for the weighted Dirichlet form is presented.

337 citations


Journal ArticleDOI
TL;DR: The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
Abstract: Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.

311 citations
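A minimal sketch of the approach evaluated here: a Poisson GEE with log link for a binary outcome, exchangeable working correlation, and robust (sandwich) standard errors, so that exp(β) estimates the relative risk. Data are simulated and all names are hypothetical:

```python
# Modified Poisson regression for clustered binary data: Poisson GEE with log
# link, exchangeable working correlation, and robust standard errors;
# exp(beta) estimates the relative risk. Simulated data, illustrative values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_clusters, m = 200, 10
groups = np.repeat(np.arange(n_clusters), m)
trt = np.repeat(rng.integers(0, 2, n_clusters), m)   # cluster-level exposure
u = np.repeat(rng.normal(0, 0.2, n_clusters), m)     # shared cluster effect
p = np.clip(np.exp(np.log(0.2) + np.log(1.5) * trt + u), 0, 1)  # true RR = 1.5
y = rng.binomial(1, p)

X = sm.add_constant(trt.astype(float))
fit = sm.GEE(y, X, groups=groups,
             family=sm.families.Poisson(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print("estimated RR:", np.exp(fit.params[1]))
print("robust SEs  :", fit.bse)   # sandwich-based by default in statsmodels GEE
```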


Journal ArticleDOI
TL;DR: The characteristics of the Poisson model and the related models that have been developed to handle overdispersion or zero-inflation (zero-inflated Poisson (ZIP) and Poisson hurdle (PH) models) or both are reviewed.
Abstract: Background: In clinical trials of behavioral health interventions, outcome variables often take the form of counts, such as days using substances or episodes of unprotected sex. Classically, count data follow a Poisson distribution; however, in practice such data often display greater heterogeneity in the form of excess zeros (zero-inflation) or greater spread in the values (overdispersion) or both. Greater sample heterogeneity may be especially common in community-based effectiveness trials, where broad eligibility criteria are implemented to achieve a generalizable sample. Objectives: This article reviews the characteristics of the Poisson model and the related models that have been developed to handle overdispersion (negative binomial (NB) model) or zero-inflation (zero-inflated Poisson (ZIP) and Poisson hurdle (PH) models) or both (zero-inflated negative binomial (ZINB) and negative binomial hurdle (NBH) models). Methods: All six models were used to model the effect of an HIV-risk reduction intervention o...

265 citations
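A sketch comparing the reviewed models on simulated counts with both overdispersion and structural zeros, using statsmodels (which covers Poisson, NB, ZIP, and ZINB; the hurdle variants would need a separate truncated-count likelihood). The DGP and settings are illustrative:

```python
# Compare Poisson, NB, ZIP, and ZINB fits on simulated counts featuring both
# overdispersion (Gamma mixing) and structural zeros. Illustrative DGP only.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(5)
n = 5_000
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.7 * x)
lam = rng.gamma(shape=2.0, scale=mu / 2.0)          # overdispersion
y = rng.poisson(lam) * (rng.uniform(size=n) > 0.3)  # ~30% structural zeros

fits = {
    "Poisson": sm.Poisson(y, X).fit(disp=0),
    "NB":      sm.NegativeBinomial(y, X).fit(disp=0),
    "ZIP":     ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0),
    "ZINB":    ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1))).fit(disp=0),
}
for name, f in fits.items():
    print(f"{name:8s} AIC = {f.aic:10.1f}")   # lower is better
```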


Journal ArticleDOI
TL;DR: In this paper, a Poisson tensor factorization algorithm (CANDECOMP-PARAFAC Alternating Poisson Regression, CP-APR), based on a majorization-minimization approach, is proposed for sparse count data.
Abstract: Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP-PARAFAC Alternating Poisson Regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee-Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions. We also explain how to implement CP-APR for large-scale sparse tensors and present results on several data sets, both real and simulated.

226 citations
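The abstract notes that CP-APR generalizes the Lee-Seung multiplicative updates. The two-way (matrix) special case of those updates, NMF under the Poisson/KL objective, fits in a few lines; the full CP-APR adds the majorization-minimization safeguards and k-way sparse-tensor machinery described in the paper. Sizes and rank below are arbitrary:

```python
# Lee-Seung multiplicative updates for NMF under the Poisson/KL objective:
# the two-way special case that CP-APR generalizes to k-way tensors.
import numpy as np

rng = np.random.default_rng(6)
m, n, r = 60, 40, 5
V = rng.poisson(rng.gamma(2.0, 1.0, (m, r)) @ rng.gamma(2.0, 1.0, (r, n)))

W = rng.uniform(0.1, 1.0, (m, r))
H = rng.uniform(0.1, 1.0, (r, n))
eps = 1e-12
for _ in range(200):
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / (H.sum(axis=1) + eps)            # update W
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)   # update H

kl = np.sum(V * np.log((V + eps) / (W @ H + eps)) - V + W @ H)
print("Poisson/KL divergence after 200 iterations:", kl)
```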


Posted Content
TL;DR: It turns out that despite the similarity of the two models, they behave rather differently: for type I, the excess interference increases exponentially in the hard-core distance, while for type II, the gap never exceeds 1 dB.
Abstract: Matérn hard-core processes of types I and II are the point processes of choice to model concurrent transmitters in CSMA networks. We determine the mean interference observed at a node of the process and compare it with the mean interference in a Poisson point process of the same density. It turns out that despite the similarity of the two models, they behave rather differently. For type I, the excess interference (relative to the Poisson case) increases exponentially in the hard-core distance, while for type II, the gap never exceeds 1 dB.
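A minimal simulation of the type II construction (dependent thinning of a parent Poisson process by uniform marks), with the retained density checked against the classical closed form; intensity, window, and hard-core distance are illustrative, and edge effects are ignored:

```python
# Matern type II thinning: each parent Poisson point gets a uniform mark, and
# a point is retained only if no other parent within the hard-core distance r
# carries a smaller mark. Edge effects ignored; parameters illustrative.
import numpy as np

rng = np.random.default_rng(7)
lam, side, r = 20.0, 5.0, 0.1      # parent intensity, window side, hard-core distance

n = rng.poisson(lam * side**2)     # parent Poisson point process
pts = rng.uniform(0, side, size=(n, 2))
marks = rng.uniform(size=n)

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
conflict = (d < r) & (marks[None, :] < marks[:, None])  # neighbor with smaller mark
keep = ~conflict.any(axis=1)

print(f"parent density {n / side**2:.2f}, Matern II density {keep.sum() / side**2:.2f}")
# Classical retained density: (1 - exp(-lam*pi*r^2)) / (pi*r^2)
print("theory:", (1 - np.exp(-lam * np.pi * r**2)) / (np.pi * r**2))
```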

Book
18 Sep 2011
TL;DR: In this article, the authors consider the problem of estimating the residual lifetime distribution and its mean for a set of classes of distributions and derive bounds on the ratio of discrete tail probabilities.
Abstract (table of contents):
1 Introduction.
2 Reliability background: 2.1 The failure rate; 2.2 Equilibrium distributions; 2.3 The residual lifetime distribution and its mean; 2.4 Other classes of distributions; 2.5 Discrete reliability classes; 2.6 Bounds on ratios of discrete tail probabilities.
3 Mixed Poisson distributions: 3.1 Tails of mixed Poisson distributions; 3.2 The radius of convergence; 3.3 Bounds on ratios of tail probabilities; 3.4 Asymptotic tail behaviour of mixed Poisson distributions.
4 Compound distributions: 4.1 Introduction and examples; 4.2 The general upper bound; 4.3 The general lower bound; 4.4 A Wald-type martingale approach.
5 Bounds based on reliability classifications: 5.1 First order properties; 5.2 Bounds based on equilibrium properties.
6 Parametric Bounds: 6.1 Exponential bounds; 6.2 Pareto bounds; 6.3 Product based bounds.
7 Compound geometric and related distributions: 7.1 Compound modified geometric distributions; 7.2 Discrete compound geometric distributions; 7.3 Application to ruin probabilities; 7.4 Compound negative binomial distributions.
8 Tijms approximations: 8.1 The asymptotic geometric case; 8.2 The modified geometric distribution; 8.3 Transform derivation of the approximation.
9 Defective renewal equations: 9.1 Some properties of defective renewal equations; 9.2 The time of ruin and related quantities; 9.3 Convolutions involving compound geometric distributions.
10 The severity of ruin: 10.1 The associated defective renewal equation; 10.2 A mixture representation for the conditional distribution; 10.3 Erlang mixtures with the same scale parameter; 10.4 General Erlang mixtures; 10.5 Further results.
11 Renewal risk processes: 11.1 General properties of the model; 11.2 The Coxian-2 case; 11.3 The sum of two exponentials; 11.4 Delayed and equilibrium renewal risk processes.
Symbol Index; Author Index; Subject Index.

Journal ArticleDOI
Fukang Zhu1
TL;DR: In this article, a negative binomial INGARCH model, a generalization of the Poisson INGARCH model, is proposed; stationarity conditions are given, as well as the autocorrelation function.
Abstract: This article discusses the modelling of integer-valued time series with overdispersion and potential extreme observations. To address this, a negative binomial INGARCH model, a generalization of the Poisson INGARCH model, is proposed, and stationarity conditions are given as well as the autocorrelation function. For estimation, we present three approaches with a focus on the maximum likelihood approach. Some results from numerical studies are presented and indicate that the proposed methodology performs better than the Poisson and double Poisson model-based methods.
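A sketch of the model class discussed here: an INGARCH(1,1) conditional-mean recursion with either a Poisson or a negative binomial conditional distribution. The notation (δ, a, b, and the NB size k) and all parameter values are choices made for this illustration:

```python
# INGARCH(1,1) sketch: lambda_t = delta + a*lambda_{t-1} + b*X_{t-1}, with
# X_t | past ~ Poisson(lambda_t) or NB with mean lambda_t and
# Var = lambda_t + lambda_t^2 / k. Notation and values are illustrative.
import numpy as np

rng = np.random.default_rng(8)

def ingarch(T, delta=1.0, a=0.3, b=0.4, nb_size=None):
    lam = delta / (1 - a - b)          # start at the stationary mean
    x = rng.poisson(lam)
    xs = np.empty(T, dtype=int)
    for t in range(T):
        lam = delta + a * lam + b * x  # conditional-mean recursion
        if nb_size is None:
            x = rng.poisson(lam)
        else:                          # NB with mean lam, extra dispersion
            x = rng.negative_binomial(nb_size, nb_size / (nb_size + lam))
        xs[t] = x
    return xs

for label, k in [("Poisson INGARCH", None), ("NB INGARCH", 4.0)]:
    x = ingarch(50_000, nb_size=k)
    print(f"{label:16s} mean={x.mean():.2f}  var={x.var():.2f}")
```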

Journal ArticleDOI
TL;DR: In this paper, the authors propose new approaches for performing classification and clustering of observations on the basis of sequencing data using a Poisson log linear model, which is an analog of diagonal linear discriminant analysis.
Abstract: In recent years, advances in high throughput sequencing technology have led to a need for specialized methods for the analysis of digital gene expression data. While gene expression data measured on a microarray take on continuous values and can be modeled using the normal distribution, RNA sequencing data involve nonnegative counts and are more appropriately modeled using a discrete count distribution, such as the Poisson or the negative binomial. Consequently, analytic tools that assume a Gaussian distribution (such as classification methods based on linear discriminant analysis and clustering methods that use Euclidean distance) may not perform as well for sequencing data as methods that are based upon a more appropriate distribution. Here, we propose new approaches for performing classification and clustering of observations on the basis of sequencing data. Using a Poisson log linear model, we develop an analog of diagonal linear discriminant analysis that is appropriate for sequencing data. We also propose an approach for clustering sequencing data using a new dissimilarity measure that is based upon the Poisson model. We demonstrate the performances of these approaches in a simulation study, on three publicly available RNA sequencing data sets, and on a publicly available chromatin immunoprecipitation sequencing data set.
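A stripped-down sketch of the Poisson log-linear classification idea, an analog of diagonal LDA for counts: per-class, per-feature rates scaled by sample size factors, with classification by Poisson log-likelihood. The paper's shrinkage of class effects is omitted, and all data are simulated:

```python
# Poisson analog of diagonal LDA for count (sequencing-like) data: estimate
# class-specific feature rates with size factors, classify by Poisson
# log-likelihood. Simplified (no shrinkage); data simulated, values illustrative.
import numpy as np

rng = np.random.default_rng(9)
n, p, K = 300, 100, 2
y = rng.integers(0, K, n)
base = rng.gamma(2.0, 5.0, p)                            # per-gene baseline rates
effect = np.where(rng.uniform(size=p) < 0.1, 2.0, 1.0)   # 10% of genes differ
mu = np.stack([base, base * effect])                     # class rates, shape (K, p)
s = rng.uniform(0.5, 2.0, n)                             # sequencing-depth size factors
X = rng.poisson(s[:, None] * mu[y])

# Fit: class rate = (class total for gene j) / (class total size factor).
eps = 1e-8
mu_hat = np.stack([X[y == k].sum(0) / s[y == k].sum() for k in range(K)]) + eps

def classify(x, si):
    # Poisson log-likelihood, dropping terms constant in the class label
    ll = (x * np.log(si * mu_hat)).sum(1) - si * mu_hat.sum(1)
    return ll.argmax()

pred = np.array([classify(X[i], s[i]) for i in range(n)])
print("training accuracy:", (pred == y).mean())
```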

Journal ArticleDOI
TL;DR: Young's modulus and Poisson's ratio of a porous polymeric construct (scaffold) quantitatively describe how it supports and transmits external stresses to its surroundings, and, in some applications, a construct having a tunable negative Poisson's ratio (an auxetic construct) may be more suitable for supporting the external forces imposed upon it by its environment.
Abstract: Young's modulus and Poisson's ratio of a porous polymeric construct (scaffold) quantitatively describe how it supports and transmits external stresses to its surroundings. While Young's modulus is always non-negative and highly tunable in magnitude, Poisson's ratio can, indeed, take on negative values despite the fact that it is non-negative for virtually every naturally occurring and artificial material. In some applications, a construct having a tunable negative Poisson's ratio (an auxetic construct) may be more suitable for supporting the external forces imposed upon it by its environment. Here, three-dimensional polyethylene glycol scaffolds with tunable negative Poisson's ratios are fabricated. Digital micromirror device projection printing (DMD-PP) is used to print single-layer constructs composed of cellular structures (pores) with special geometries, arrangements, and deformation mechanisms. The presence of the unit-cellular structures tunes the magnitude and polarity (positive or negative) of Poisson's ratio. Multilayer constructs are fabricated with DMD-PP by stacking the single-layer constructs with alternating layers of vertical connecting posts. The Poisson's ratios of the single- and multilayer constructs are determined from strain experiments, which show (1) that the Poisson's ratios of the constructs are accurately predicted by analytical deformation models and (2) that no slipping occurs between layers in the multilayer constructs and the addition of new layers does not affect Poisson's ratio.

Journal ArticleDOI
TL;DR: The properties of the proposed distribution are discussed, including a formal derivation of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles, and moments, including the mean and variance.

Journal ArticleDOI
TL;DR: In this article, a bivariate integer-valued autoregressive process of order 1 (BINAR(1)) is introduced for count data, and a method of conditional maximum likelihood for the estimation of its unknown parameters is proposed.
Abstract: The study of time series models for count data has become a topic of special interest during the last years. However, while research on univariate time series for counts now flourishes, the literature on multivariate time series models for count data is notably more limited. In the present paper, a bivariate integer-valued autoregressive process of order 1 (BINAR(1)) is introduced. Emphasis is placed on models with bivariate Poisson and bivariate negative binomial innovations. We discuss properties of the BINAR(1) model and propose the method of conditional maximum likelihood for the estimation of its unknown parameters. Issues of diagnostics and forecasting are considered and predictions are produced by means of the conditional forecast distribution. Estimation uncertainty is accommodated by taking advantage of the asymptotic normality of maximum likelihood estimators and constructing appropriate confidence intervals for the h-step-ahead conditional probability mass function. The proposed model is appli...
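A simulation sketch of the BINAR(1) ingredients described in the abstract: binomial thinning of the previous counts plus bivariate Poisson innovations built from a common shock, which is what correlates the two series. Parameter values are illustrative:

```python
# BINAR(1) building blocks: binomial thinning for survival of existing counts,
# plus bivariate Poisson innovations from a common shock (eps1 = Z0 + Z1,
# eps2 = Z0 + Z2) so the two series are correlated. Values illustrative.
import numpy as np

rng = np.random.default_rng(10)
T = 50_000
a1, a2 = 0.5, 0.3              # thinning probabilities
l0, l1, l2 = 1.0, 2.0, 1.5     # common and series-specific innovation rates

x = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    z0 = rng.poisson(l0)       # common shock shared by both innovations
    eps = np.array([z0 + rng.poisson(l1), z0 + rng.poisson(l2)])
    surv = rng.binomial(x[t - 1], [a1, a2])   # binomial thinning a o X_{t-1}
    x[t] = surv + eps

print("means:", x.mean(0))     # theory: (l0+l1)/(1-a1), (l0+l2)/(1-a2)
print("cross-correlation:", np.corrcoef(x[:, 0], x[:, 1])[0, 1])
```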

Journal ArticleDOI
TL;DR: In this article, the authors consider a class of observation-driven Poisson count processes where the current value of the accompanying intensity process depends on previous values of both processes and show that the bivariate process has a unique stationary distribution and that the stationary version of the count process is absolutely regular.
Abstract: We consider a class of observation-driven Poisson count processes where the current value of the accompanying intensity process depends on previous values of both processes. We show under a contractive condition that the bivariate process has a unique stationary distribution and that the stationary version of the count process is absolutely regular. Moreover, since the intensities can be written as measurable functionals of the count variables we conclude that the bivariate process is ergodic. As an important application of these results, we show how a test method previously used in the case of independent Poisson data can be used in the case of Poisson count processes.

Journal ArticleDOI
TL;DR: In this article, the integrands in the Wiener–Itô chaos expansion were identified explicitly in terms of iterated difference operators for a Poisson process on an arbitrary measurable space with sigma-finite intensity measure.
Abstract: We consider a Poisson process η on an arbitrary measurable space with an arbitrary sigma-finite intensity measure. We establish an explicit Fock space representation of square integrable functions of η. As a consequence we identify explicitly, in terms of iterated difference operators, the integrands in the Wiener–Itô chaos expansion. We apply these results to extend well-known variance inequalities for homogeneous Poisson processes on the line to the general Poisson case. The Poincaré inequality is a special case. Further applications are covariance identities for Poisson processes on (strictly) ordered spaces and Harris–FKG inequalities for monotone functions of η.

Posted ContentDOI
TL;DR: In this paper, a modification of the horizontal dividend barrier strategy is studied in which random observation times are introduced at which dividends can be paid and ruin can be observed, along with the effect of these observation times on the performance of the dividend strategy.
Abstract: In the framework of the classical compound Poisson process in collective risk theory, we study a modification of the horizontal dividend barrier strategy by introducing random observation times at which dividends can be paid and ruin can be observed. This model contains both the continuous-time and the discrete-time risk model as a limit and represents a certain type of bridge between them which still enables the explicit calculation of moments of total discounted dividend payments until ruin. Numerical illustrations for several sets of parameters are given and the effect of random observation times on the performance of the dividend strategy is studied.
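A Monte Carlo sketch of the model described here: a compound Poisson surplus process inspected only at Poisson(γ) observation epochs, at which any surplus above the barrier b is paid as a dividend and ruin can be declared. This estimates expected discounted dividends by simulation rather than by the paper's explicit moment calculations; all parameter values are illustrative:

```python
# Randomized-observation dividend model: compound Poisson surplus, Poisson(gamma)
# observation epochs where dividends above barrier b are paid and ruin can be
# observed. Monte Carlo estimate of expected discounted dividends; values illustrative.
import numpy as np

rng = np.random.default_rng(11)
u, c, lam, mean_claim = 5.0, 1.5, 1.0, 1.0    # initial surplus, premium rate, claims
b, gamma, delta, horizon = 8.0, 2.0, 0.03, 200.0

def one_path():
    t, surplus, disc_divs = 0.0, u, 0.0
    while t < horizon:
        # next event in the superposed stream: claim (rate lam) or observation (rate gamma)
        w = rng.exponential(1.0 / (lam + gamma))
        t += w
        surplus += c * w
        if rng.uniform() < lam / (lam + gamma):
            surplus -= rng.exponential(mean_claim)    # a claim arrives
        else:                                         # an observation epoch
            if surplus < 0:
                return disc_divs                      # ruin observed
            if surplus > b:
                disc_divs += np.exp(-delta * t) * (surplus - b)
                surplus = b
    return disc_divs

print("E[discounted dividends] ~", np.mean([one_path() for _ in range(5_000)]))
```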

Journal ArticleDOI
TL;DR: In this paper, the authors present two models for estimating the probabilities of future earthquakes in California, to be tested in the Collaboratory for the Study of Earthquake Predictability (CSEP).
Abstract: We present two models for estimating the probabilities of future earthquakes in California, to be tested in the Collaboratory for the Study of Earthquake Predictability (CSEP). The first is a time-independent model of adaptively smoothed seismicity that we modified from Helmstetter et al. (2007). The model provides five-year forecasts for earthquakes with magnitudes M ≥ 4.95. We show that large earthquakes tend to occur near the locations of small M ≥ 2 events, so that a high-resolution estimate of the spatial distribution of future large quakes is obtained from the locations of the numerous small events. We further assume a universal Gutenberg-Richter magnitude distribution. In retrospective tests, we show that a Poisson distribution does not fit the observed rate variability, in contrast to assumptions in current earthquake predictability experiments. We therefore issued forecasts using a better-fitting negative binomial distribution for the number of events. The second model is a time-dependent epidemic-type aftershock sequence (ETAS) model that we modified from Helmstetter et al. (2006) and that provides next-day forecasts for M ≥ 3.95. In this model, the forecasted rate is the sum of a background rate (proportional to the time-independent model rate) and of the expected rate of triggered events due to all prior earthquakes. Each earthquake triggers events with a rate that increases exponentially with its magnitude and decays in time according to the Omori-Utsu law. An isotropic kernel models the spatial density of aftershocks for small (M ≤ 5.5) events, while for larger quakes, we smooth early aftershocks to forecast later events. We estimate parameter values by optimizing retrospective forecasts and find that the short-term model realizes a probability gain of about 6.0 per earthquake over the time-independent model. Online Material: Identification of explosions and ETAS parameters.
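The forecasted rate described in the abstract is a background term plus magnitude-dependent, Omori-Utsu-decaying contributions from past events. A sketch of that conditional intensity follows; all parameter values are placeholders, not the calibrated California values from the paper:

```python
# Temporal ETAS conditional intensity: background rate plus contributions from
# past events that grow exponentially with magnitude and decay in time by the
# Omori-Utsu law. Parameter values are illustrative placeholders.
import numpy as np

mu = 0.2                                # background rate (events/day)
K, alpha, c, p = 0.02, 1.0, 0.01, 1.2   # triggering parameters (hypothetical)
M0 = 3.95                               # reference magnitude for the forecasts

past_t = np.array([0.0, 1.5, 2.0])      # past event times (days)
past_m = np.array([5.1, 4.2, 6.0])      # past event magnitudes

def etas_rate(t):
    dt = t - past_t
    active = dt > 0                     # only earlier events contribute
    trig = K * 10.0 ** (alpha * (past_m[active] - M0)) * (dt[active] + c) ** (-p)
    return mu + trig.sum()

for t in [2.001, 2.1, 3.0, 10.0]:
    print(f"lambda({t:6.3f} d) = {etas_rate(t):.3f} events/day")
```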

Journal ArticleDOI
TL;DR: The free Bessel laws, a family of real probability measures related to the free Poisson law, are introduced and studied, covering analytic and combinatorial aspects and connections with random matrices and quantum groups.
Abstract: We introduce and study a remarkable family of real probability measures that we call free Bessel laws. These are related to the free Poisson law via explicit formulae. Our study includes definition and basic properties, analytic aspects (supports, atoms, densities), combinatorial aspects (functional transforms, moments, partitions), and a discussion of the relation with random matrices and quantum groups.

Journal ArticleDOI
TL;DR: The standard statistical method for analyzing count data is the Poisson regression model, usually estimated by the maximum likelihood (ML) method, which is very sensitive to multico...

Journal ArticleDOI
TL;DR: Central limit theorems for $U$-statistics of Poisson point processes are shown, with explicit bounds for the Wasserstein distance to a Gaussian random variable; as applications, the intersection process of Poisson hyperplanes and the length of a random geometric graph are investigated.
Abstract: A $U$-statistic of a Poisson point process is defined as the sum $\sum f(x_1,\ldots,x_k)$ over all (possibly infinitely many) $k$-tuples of distinct points of the point process. Using the Malliavin calculus, the Wiener–Itô chaos expansion of such a functional is computed and used to derive a formula for the variance. Central limit theorems for $U$-statistics of Poisson point processes are shown, with explicit bounds for the Wasserstein distance to a Gaussian random variable. As applications, the intersection process of Poisson hyperplanes and the length of a random geometric graph are investigated.

Journal ArticleDOI
TL;DR: Simulation results show that, with appropriate selection of the Primary User Activity Index, the proposed model achieves higher primary-user detection accuracy, reduced false-alarm probabilities, and higher throughput.
Abstract: In many recent studies on cognitive radio (CR) networks, the primary user activity is assumed to follow the Poisson traffic model with exponentially distributed interarrivals. The Poisson modeling may lead to cases where primary user activities are modeled as smooth and burst-free traffic. As a result, this may cause the cognitive radio users to miss some available but unutilized spectrum, leading to lower throughput and high false-alarm probabilities. The main contribution of this paper is to propose a novel model to parametrize the primary user traffic in a more efficient and accurate way in order to overcome the drawbacks of the Poisson modeling. The proposed model makes this possible by arranging the first-difference filtered and correlated primary user data into clusters. In this paper, a new metric called the Primary User Activity Index, Φ, is introduced, which accounts for the relation between the cluster filter output and correlation statistics. The performance of the proposed model is evaluated in terms of traffic-estimation accuracy and false-alarm probability, while keeping the detection probability of primary users at a constant value. Simulation results show that, with appropriate selection of the Primary User Activity Index, higher primary-user detection accuracy, reduced false-alarm probabilities, and higher throughput can be achieved by the proposed model.

Book
09 Sep 2011
TL;DR: A stochastic model of make-to-stock firms is presented, based on a buffer flow system with jumps in which cumulative production and demand are governed by two Poisson counting processes with random intensities parameterized by production capacity and price.
Abstract: We present a stochastic model of make-to-stock firms based on a buffer flow system with jumps. The cumulative production and the cumulative demand are governed by two Poisson counting processes with random intensities parameterized by production capacity and price, respectively. Optimal operating and pricing policies (short-run decisions) and optimal capacity (long-run decisions) are explored by application of a two-stage optimization device. Detailed computations regarding the Poisson buffer flow system and a variation on the basic model with learning effects are also presented.

Journal ArticleDOI
TL;DR: A modification of generalized estimating equations (GEEs) methodology is proposed for hypothesis testing of high-dimensional data, with particular interest in multivariate abundance data in ecology, and it is shown via theory and simulation that this substantially improves the power of Wald statistics when cluster size is not small.
Abstract: Summary: A modification of generalized estimating equations (GEEs) methodology is proposed for hypothesis testing of high-dimensional data, with particular interest in multivariate abundance data in ecology, an application arising in thousands of environmental science studies. Such data are typically counts characterized by high dimensionality (in the sense that cluster size exceeds the number of clusters, n > K) and over-dispersion relative to the Poisson distribution. Usual GEE methods cannot be applied in this setting primarily because sandwich estimators become numerically unstable as n increases. We propose instead using a regularized sandwich estimator that assumes a common correlation matrix R, and shrinks the sample estimate of R toward the working correlation matrix to improve its numerical stability. It is shown via theory and simulation that this substantially improves the power of Wald statistics when cluster size is not small. We apply the proposed approach to study the effects of nutrient addition on nematode communities, and in doing so discuss important issues in implementation, such as using statistics that have good properties when parameter estimates approach the boundary of the parameter space, and using resampling to enable valid inference that is robust to high dimensionality and to possible model misspecification.
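The core numerical device, shrinking the sample correlation matrix toward a working correlation to stabilize the sandwich estimator when n > K, can be sketched directly. Here the working matrix is the identity and the shrinkage weight is fixed, whereas the paper develops a principled choice:

```python
# Shrinkage of a rank-deficient sample correlation matrix (n variables, K < n
# clusters) toward a working matrix (identity here) to restore positive
# definiteness. Fixed shrinkage weight for illustration only.
import numpy as np

rng = np.random.default_rng(12)
K, n = 20, 50                                    # fewer clusters than variables
Sigma = 0.4 * np.ones((n, n)) + 0.6 * np.eye(n)  # true exchangeable correlation
resid = rng.normal(size=(K, n)) @ np.linalg.cholesky(Sigma).T

R_hat = np.corrcoef(resid, rowvar=False)         # rank-deficient when n > K
delta = 0.3                                      # fixed shrinkage weight (illustrative)
R_reg = delta * R_hat + (1 - delta) * np.eye(n)

print("min eigenvalue, sample R   :", np.linalg.eigvalsh(R_hat).min())
print("min eigenvalue, shrunken R :", np.linalg.eigvalsh(R_reg).min())
```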

Journal ArticleDOI
TL;DR: A broad class of alternative models, nonparametric mixtures of rounded continuous kernels, is proposed; focusing on the rounded Gaussian case, the modeling framework is generalized to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications.
Abstract: Although Bayesian nonparametric mixture models for continuous data are well developed, the literature on related approaches for count data is limited. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions with variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow for smooth deviations from the Poisson. We propose a broad class of alternative models, nonparametric mixtures of rounded continuous kernels. We develop an efficient Gibbs sampler for posterior computation, and perform a simulation study to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. We illustrate our methods through applications to a developmental toxicity study and m...

Journal ArticleDOI
TL;DR: The multiaxial PML (M-PML) as mentioned in this paper attenuates the waves in PMLs using different damping profiles that are proportional to each other in orthogonal directions.
Abstract: Perfectly matched layer (PML) absorbing boundaries are widely used to suppress spurious edge reflections in seismic modeling. When modeling Rayleigh waves in the presence of a free surface, the classical PML algorithm becomes unstable when the Poisson's ratio of the medium is high. Numerical errors can accumulate exponentially and terminate the simulation due to computational overflows. Numerical tests show that the divergence speed of the classical PML has a nonlinear relationship with the Poisson's ratio. Generally, the higher the Poisson's ratio, the faster the classical PML diverges. The multiaxial PML (M-PML) attenuates the waves in PMLs using different damping profiles that are proportional to each other in orthogonal directions. The proportion coefficients of the damping profiles usually vary with the specific model settings. If they are set appropriately, the M-PML algorithm is stable for high Poisson's ratio earth models. Through numerical tests of 40 models with Poisson's ratios that varied from 0.10 to 0.49, we found that a constant proportion coefficient of 1.0 for the x- and z-directional damping profiles is sufficient to stabilize the M-PML for all 2D isotropic elastic cases. Wavefield simulations indicate that the instability of the classical PML is strongly related to the wave phenomena near the free surface. When applying the multiaxial technique only in the corners of the PML near the free surface, the original M-PML technique can be simplified without losing its stability. The simplified M-PML works efficiently for homogeneous and heterogeneous earth models with high Poisson's ratios. The analysis in this paper is based on 2D finite-difference modeling in the time domain and can easily be extended to the 3D domain and to other numerical methods.

Journal ArticleDOI
TL;DR: The proposed approximation produces results equivalent to those obtained with the accurate (nonanalytical) exact unbiased inverse, and thus, notably better than one would get with the asymptotically unbiased inverse transformation that is commonly used in applications.
Abstract: We presented an exact unbiased inverse of the Anscombe variance-stabilizing transformation in [M. Makitalo and A. Foi, “Optimal inversion of the Anscombe transformation in low-count Poisson image denoising,” IEEE Trans. Image Process., vol. 20, no. 1, pp. 99-109, Jan. 2011.] and showed that when applied to Poisson image denoising, the combination of variance stabilization and state-of-the-art Gaussian denoising algorithms is competitive with some of the best Poisson denoising algorithms. We also provided a MATLAB implementation of our method, where the exact unbiased inverse transformation appears in nonanalytical form. Here, we propose a closed-form approximation of the exact unbiased inverse in order to facilitate the use of this inverse. The proposed approximation produces results equivalent to those obtained with the accurate (nonanalytical) exact unbiased inverse, and thus, notably better than one would get with the asymptotically unbiased inverse transformation that is commonly used in applications.
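For orientation, the closed-form approximation can be compared with the plain asymptotic inverse; the coefficient values below follow the published closed form as best reproduced here and should be verified against the letter before serious use. The exact unbiased inverse maps E[f(x)] back to the Poisson mean λ, which the asymptotic inverse gets visibly wrong at low counts:

```python
# Closed-form approximation of the exact unbiased inverse of the Anscombe
# transformation, compared against the simple asymptotic inverse. Coefficients
# follow the published closed form (verify against the letter before use).
import numpy as np

def inverse_asymptotic(d):
    return (d / 2.0) ** 2 - 3.0 / 8.0

def inverse_exact_unbiased_approx(d):
    return (0.25 * d**2
            + 0.25 * np.sqrt(1.5) / d
            - 1.375 / d**2
            + 0.625 * np.sqrt(1.5) / d**3
            - 0.125)

rng = np.random.default_rng(0)
for lam in [0.5, 1.0, 2.0, 5.0, 20.0]:
    # Monte Carlo estimate of E[f(x)] for x ~ Poisson(lam), f = Anscombe transform
    d = np.mean(2.0 * np.sqrt(rng.poisson(lam, 1_000_000) + 0.375))
    print(f"lam={lam:5.1f}  asymptotic={inverse_asymptotic(d):6.3f}  "
          f"exact-unbiased approx={inverse_exact_unbiased_approx(d):6.3f}")
```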