
Showing papers on "Poisson distribution published in 2005"


Journal ArticleDOI
TL;DR: In this article, it is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials, and that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process.

749 citations


Journal ArticleDOI
TL;DR: The density function of the distance to the n-th nearest neighbor of a homogeneous process in ℝ^m is shown to be governed by a generalized Gamma distribution, which has many implications for large wireless networks of randomly distributed nodes.
Abstract: The distribution of Euclidean distances in Poisson point processes is determined. The main result is the density function of the distance to the n-th nearest neighbor of a homogeneous process in ℝ^m, which is shown to be governed by a generalized Gamma distribution. The result has many implications for large wireless networks of randomly distributed nodes.
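
A quick way to see the generalized Gamma form is the standard mapping argument: the number of points of an intensity-λ process within radius r of the origin is Poisson with mean λ·c_m·r^m (c_m the unit m-ball volume), so λ·c_m·R_n^m is Erlang(n, 1). The sketch below samples n-th-neighbor distances that way; it is our illustration of that argument, and all names are ours, not the paper's.

```python
import math
import numpy as np

def nth_neighbor_distances(lam, m, n, size, seed=0):
    """Sample the distance from the origin to the n-th nearest point of a
    homogeneous Poisson point process of intensity lam in R^m, using the
    fact that lam * c_m * R_n^m is Gamma(n, 1) (Erlang) distributed."""
    rng = np.random.default_rng(seed)
    c_m = math.pi ** (m / 2) / math.gamma(m / 2 + 1)  # volume of the unit m-ball
    g = rng.gamma(shape=n, scale=1.0, size=size)      # Erlang(n) draws
    return (g / (lam * c_m)) ** (1.0 / m)

# Sanity check in the plane: E[R_1] = 1 / (2 * sqrt(lam)).
samples = nth_neighbor_distances(lam=4.0, m=2, n=1, size=200_000)
print(samples.mean(), 1.0 / (2.0 * math.sqrt(4.0)))
```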

662 citations


Journal ArticleDOI
TL;DR: Random effect models for repeated measurements of zero-inflated count responses are discussed; in addition to the problem of extra zeros, the correlation between measurements on the same subject at different occasions needs to be taken into account.
Abstract: For count responses, the situation of excess zeros (relative to what standard models allow) often occurs in biomedical and sociological applications. Modeling repeated measures of zero-inflated count data presents special challenges. This is because in addition to the problem of extra zeros, the correlation between measurements upon the same subject at different occasions needs to be taken into account. This article discusses random effect models for repeated measurements on this type of response variable. A useful model is the hurdle model with random effects, which separately handles the zero observations and the positive counts. In maximum likelihood model fitting, we consider both a normal distribution and a nonparametric approach for the random effects. A special case of the hurdle model can be used to test for zero inflation. Random effects can also be introduced in a zero-inflated Poisson or negative binomial model, but such a model may encounter fitting problems if there is zero deflation at any s...
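
For readers new to the hurdle construction mentioned above, here is a minimal sketch of the hurdle-Poisson pmf, without the article's random effects; pi0 (the probability of a zero) and lam are illustrative names.

```python
import math

def hurdle_poisson_pmf(k, pi0, lam):
    """Hurdle-Poisson pmf: a point mass pi0 at zero, plus a zero-truncated
    Poisson(lam) for the positive counts."""
    if k == 0:
        return pi0
    trunc = 1.0 - math.exp(-lam)  # P(Poisson(lam) > 0), truncation constant
    return (1.0 - pi0) * math.exp(-lam) * lam ** k / (math.factorial(k) * trunc)

# The probabilities sum to one:
print(sum(hurdle_poisson_pmf(k, pi0=0.4, lam=2.5) for k in range(60)))
```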

330 citations


Journal ArticleDOI
K. Gilholm, D. Salmond
17 Oct 2005
TL;DR: In this paper, a Bayesian filter was developed for tracking an extended object in clutter based on two simple axioms: (i) the numbers of received target and clutter measurements in a frame are Poisson distributed (so several measurements may originate from the target) and (ii) target extent is modelled by a spatial probability distribution, and each target-related measurement is an independent 'random draw' from this spatial distribution (convolved with a sensor model).
Abstract: A Bayesian filter has been developed for tracking an extended object in clutter based on two simple axioms: (i) the numbers of received target and clutter measurements in a frame are Poisson distributed (so several measurements may originate from the target) and (ii) target extent is modelled by a spatial probability distribution and each target-related measurement is an independent 'random draw' from this spatial distribution (convolved with a sensor model). Diffuse spatial models of target extent are of particular interest. This model is especially suitable for a particle filter implementation, and examples are presented for a Gaussian mixture model and for a uniform stick target convolved with a Gaussian error. A rather restrictive special case that admits a solution in the form of a multiple hypothesis Kalman filter is also discussed and demonstrated.

303 citations


Journal ArticleDOI
TL;DR: Binomial mixture models are evaluated using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km2 quadrats are surveyed using the territory mapping method three times during each breeding season, for eight species with contrasting distribution, abundance, and detectability.
Abstract: Abundance estimation in ecology is usually accomplished by capture–recapture, removal, or distance sampling methods. These may be hard to implement at large spatial scales. In contrast, binomial mixture models enable abundance estimation without individual identification, based simply on temporally and spatially replicated counts. Here, we evaluate mixture models using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km2 quadrats are surveyed using the territory mapping method three times during each breeding season. We chose eight species with contrasting distribution (wide–narrow), abundance (high–low), and detectability (easy–difficult). Abundance was modeled as a random effect with a Poisson or negative binomial distribution, with mean affected by forest cover, elevation, and route length. Detectability was a logit-linear function of survey date, survey date-by-elevation, and sampling effort (time per transect unit). Resulting covariate effects and parameter est...
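
The computation at the heart of such binomial mixture (N-mixture) models is marginalizing the latent abundance N out of the replicated counts. A minimal sketch for one site, assuming a Poisson prior on N, a common detection probability p, and a truncation bound K; all names are ours, not the paper's.

```python
import math

def site_likelihood(counts, lam, p, K=200):
    """Marginal likelihood of replicated counts at one site under a binomial
    mixture model: N ~ Poisson(lam), each count y | N ~ Binomial(N, p)."""
    total = 0.0
    for N in range(max(counts), K + 1):           # sum out the latent abundance
        log_pois = -lam + N * math.log(lam) - math.lgamma(N + 1)
        log_binom = sum(
            math.lgamma(N + 1) - math.lgamma(y + 1) - math.lgamma(N - y + 1)
            + y * math.log(p) + (N - y) * math.log(1 - p)
            for y in counts
        )
        total += math.exp(log_pois + log_binom)
    return total

print(site_likelihood([3, 2, 4], lam=5.0, p=0.6))
```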

294 citations


Proceedings ArticleDOI
TL;DR: In this paper, the measurements are modelled as a Poisson process with a spatially dependent rate parameter, which allows extended targets to be modelled as an intensity distribution rather than a set of points and, for a target formation, gives the option of modelling part of the group as a spatial distribution of target density.
Abstract: It is common practice to represent a target group (or an extended target) as a set of point sources and attempt to formulate a tracking filter by constructing possible assignments between measurements and the sources. We suggest an alternative approach that produces a measurement model (likelihood) in terms of the spatial density of measurements over the sensor observation region. In particular, the measurements are modelled as a Poisson process with a spatially dependent rate parameter. This representation allows us to model extended targets as an intensity distribution rather than a set of points and, for a target formation, it gives the option of modelling part of the group as a spatial distribution of target density. Furthermore, as a direct consequence of the Poisson model, the measurement likelihood may be evaluated without constructing explicit association hypotheses. This considerably simplifies the filter and gives a substantial computational saving in a particle filter implementation. The Poisson target-measurement model will be described and its relationship to other filters will be discussed. Illustrative simulation examples will be presented.
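
Under this Poisson model, the frame likelihood takes the form exp(−∫λ)·∏_i λ(z_i), with λ(z) the sum of a clutter intensity and the target contribution. When the clutter rate and the expected number of target-originated measurements do not depend on the target state, the exponential factor is common to all particles and cancels in the weights, leaving the product below. A minimal sketch; all function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def log_likelihood(measurements, state, clutter_density, mean_target_meas, spatial_pdf):
    """Log of prod_i lambda(z_i) with lambda(z) = clutter_density(z)
    + mean_target_meas * spatial_pdf(z | state); no association hypotheses."""
    lp = 0.0
    for z in measurements:
        lam = clutter_density(z) + mean_target_meas * spatial_pdf(z, state)
        lp += np.log(lam)
    return lp

# Toy 1-D usage: uniform clutter, Gaussian target spread around the state.
clutter = lambda z: 0.2
target_pdf = lambda z, x: np.exp(-0.5 * ((z - x) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
print(log_likelihood([4.8, 5.1, 9.0], 5.0, clutter, 3.0, target_pdf))
```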

292 citations


Book
16 Sep 2005
TL;DR: In this book, the linearization of Poisson structures is discussed, along with generalities on Poisson structures, Poisson cohomology, Levi decomposition, multiplicative and quadratic Poisson structures, Nambu structures and singular foliations, Lie groupoids, and Lie algebroids.
Abstract: Generalities on Poisson Structures.- Poisson Cohomology.- Levi Decomposition.- Linearization of Poisson Structures.- Multiplicative and Quadratic Poisson Structures.- Nambu Structures and Singular Foliations.- Lie Groupoids.- Lie Algebroids.

279 citations


Journal ArticleDOI
TL;DR: The BD-τ method avoids the negative populations encountered in the original τ-leap method, and thus conserves mass; simulations using prototype reaction networks show that it is more accurate than the original method for comparable coarse-graining in time.
Abstract: Recently, Gillespie introduced the τ-leap approximate, accelerated stochastic Monte Carlo method for well-mixed reacting systems [J. Chem. Phys. 115, 1716 (2001)]. In each time increment of that method, one executes a number of reaction events, selected randomly from a Poisson distribution, to enable simulation of long times. Here we introduce a binomial distribution τ-leap algorithm (abbreviated as BD-τ method). This method combines the bounded nature of the binomial distribution variable with the limiting reactant and constrained firing concepts to avoid negative populations encountered in the original τ-leap method of Gillespie for large time increments, and thus conserve mass. Simulations using prototype reaction networks show that the BD-τ method is more accurate than the original method for comparable coarse-graining in time.
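
To make the contrast concrete, the sketch below draws the number of firings of one reaction channel with propensity a_j over a leap tau under both schemes. The mean-matched binomial parameterization capped at k_max firings is one common choice in the BD-τ family, not necessarily the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_firings(a_j, tau):
    """Original tau-leap: firings ~ Poisson(a_j * tau); unbounded above, so
    large leaps can drive reactant populations negative."""
    return rng.poisson(a_j * tau)

def binomial_firings(a_j, tau, k_max):
    """Binomial tau-leap: firings ~ Binomial(k_max, a_j * tau / k_max), where
    k_max is the most firings the limiting reactant allows; mass is conserved."""
    p = min(a_j * tau / k_max, 1.0)  # mean-matched success probability
    return rng.binomial(k_max, p)

print(poisson_firings(50.0, 0.1), binomial_firings(50.0, 0.1, k_max=8))
```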

252 citations


Journal ArticleDOI
TL;DR: In this article, a modified Poisson tau-leaping procedure is proposed that avoids negative populations while being easier to implement than the binomial procedure, and that will generally be more accurate than the original Poisson tau-leaping procedure.
Abstract: The explicit tau-leaping procedure attempts to speed up the stochastic simulation of a chemically reacting system by approximating the number of firings of each reaction channel during a chosen time increment tau as a Poisson random variable. Since the Poisson random variable can have arbitrarily large sample values, there is always the possibility that this procedure will cause one or more reaction channels to fire so many times during tau that the population of some reactant species will be driven negative. Two recent papers have shown how that unacceptable occurrence can be avoided by replacing the Poisson random variables with binomial random variables, whose values are naturally bounded. This paper describes a modified Poisson tau-leaping procedure that also avoids negative populations, but is easier to implement than the binomial procedure. The new Poisson procedure also introduces a second control parameter, whose value essentially dials the procedure from the original Poisson tau-leaping at one extreme to the exact stochastic simulation algorithm at the other; therefore, the modified Poisson procedure will generally be more accurate than the original Poisson procedure.

241 citations


Journal ArticleDOI
TL;DR: The capabilities of the free software package BayesX for estimating regression models with structured additive predictors based on MCMC inference are described; the program extends the capabilities of existing software for semiparametric regression included in S-PLUS, SAS, R or Stata.
Abstract: There has been much recent interest in Bayesian inference for generalized additive and related models. The increasing popularity of Bayesian methods for these and other model classes is mainly caused by the introduction of Markov chain Monte Carlo (MCMC) simulation techniques which allow realistic modeling of complex problems. This paper describes the capabilities of the free software package BayesX for estimating regression models with structured additive predictor based on MCMC inference. The program extends the capabilities of existing software for semiparametric regression included in S-PLUS, SAS, R or Stata. Many model classes well known from the literature are special cases of the models supported by BayesX. Examples are generalized additive (mixed) models, dynamic models, varying coefficient models, geoadditive models, geographically weighted regression and models for space-time regression. BayesX supports the most common distributions for the response variable. For univariate responses these are Gaussian, Binomial, Poisson, Gamma, negative Binomial, zero inflated Poisson and zero inflated negative binomial. For multicategorical responses, both multinomial logit and probit models for unordered categories of the response as well as cumulative threshold models for ordered categories can be estimated. Moreover, BayesX allows the estimation of complex continuous time survival and hazard rate models.

241 citations


Journal ArticleDOI
TL;DR: It is shown that almost linear mappings between unital Poisson JC*-algebras satisfying certain functional equations are Poisson JC*-algebra homomorphisms, and the Cauchy–Rassias stability of Poisson JC*-algebra homomorphisms in Poisson JC*-algebras is proved.
Abstract: It is shown that every almost linear mapping $h:\mathcal{A}\to\mathcal{B}$ of a unital Poisson JC*-algebra $\mathcal{A}$ to a unital Poisson JC*-algebra $\mathcal{B}$ is a Poisson JC*-algebra homomorphism when $h(2^n u \circ y) = h(2^n u) \circ h(y)$, $h(3^n u \circ y) = h(3^n u) \circ h(y)$ or $h(q^n u \circ y) = h(q^n u) \circ h(y)$ for all $y \in \mathcal{A}$, all unitary elements $u \in \mathcal{A}$ and $n = 0, 1, 2, \dots$, and that every almost linear almost multiplicative mapping $h:\mathcal{A}\to\mathcal{B}$ is a Poisson JC*-algebra homomorphism when $h(2x) = 2h(x)$, $h(3x) = 3h(x)$ or $h(qx) = qh(x)$ for all $x \in \mathcal{A}$. Here the numbers 2, 3, q depend on the functional equations given in the almost linear mappings or in the almost linear almost multiplicative mappings. Moreover, we prove the Cauchy–Rassias stability of Poisson JC*-algebra homomorphisms in Poisson JC*-algebras.

Book
26 Apr 2005
TL;DR: This book presents Stein's method for normal approximation (Chen and Shao), Stein's method for Poisson and compound Poisson approximation (Erhardsson), Stein's method and Poisson process approximation (Xia), and three general approaches to Stein's method (Reinert).
Abstract: Stein's Method for Normal Approximation (L Chen & Q-M Shao) Stein's Method for Poisson and Compound Poisson Approximation (T Erhardsson) Stein's Method and Poisson Process Approximation (A Xia) Three General Approaches to Stein's Method (G Reinert).

Journal ArticleDOI
TL;DR: In this article, it was shown that Poisson's ratio for anisotropic elastic materials can have an arbitrarily large positive or negative value under the prerequisite of positive definiteness of strain energy density.
Abstract: Poisson's ratio for isotropic elastic materials is bounded between -1 and 1/2. It is shown that Poisson's ratio for anisotropic elastic materials can have an arbitrarily large positive or negative value under the prerequisite of positive definiteness of strain energy density. The large Poisson's ratio for cubic materials is physically realistic because the strains are bounded.
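
For the isotropic case, the quoted bounds follow in one line from positive definiteness of the strain energy, which is equivalent to positive bulk and shear moduli (standard notation: E Young's modulus, K bulk modulus, μ shear modulus):

```latex
K = \frac{E}{3(1 - 2\nu)} > 0, \qquad
\mu = \frac{E}{2(1 + \nu)} > 0
\quad\Longrightarrow\quad -1 < \nu < \tfrac{1}{2}.
```

No such two-constant argument constrains anisotropic materials, which is why the ratio can become arbitrarily large there.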

Journal ArticleDOI
TL;DR: It is proved that the generalized Poisson distribution GP(θ, η) (η ≥ 0) is a mixture of Poisson distributions; this is a new property for a distribution which is the topic of the book by Consul (1989).
Abstract: We prove that the generalized Poisson distribution GP(θ, η) (η ≥ 0) is a mixture of Poisson distributions; this is a new property for a distribution which is the topic of the book by Consul (1989). Because we find that the fits to count data of the generalized Poisson and negative binomial distributions are often similar, to understand their differences, we compare the probability mass functions and skewnesses of the generalized Poisson and negative binomial distributions with the first two moments fixed. They have slight differences in many situations, but their zero-inflated distributions, with masses at zero, means and variances fixed, can differ more. These probabilistic comparisons are helpful in selecting a better fitting distribution for modelling count data with long right tails. Through a real example of count data with large zero fraction, we illustrate how the generalized Poisson and negative binomial distributions as well as their zero-inflated distributions can be discriminated.
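
The moment-matched comparison is easy to reproduce. Below is a minimal sketch using Consul's parameterization of the generalized Poisson pmf, with a negative binomial matched to the GP's first two moments (mean θ/(1−η), variance θ/(1−η)³); the parameter values are illustrative.

```python
import math

def gp_pmf(k, theta, eta):
    """Generalized Poisson pmf (Consul): theta * (theta + eta*k)^(k-1)
    * exp(-theta - eta*k) / k!, here for 0 <= eta < 1."""
    return theta * (theta + eta * k) ** (k - 1) * math.exp(-theta - eta * k) / math.factorial(k)

def nb_pmf(k, mu, var):
    """Negative binomial pmf parameterized by its first two moments (var > mu)."""
    r = mu * mu / (var - mu)   # shape
    p = r / (r + mu)           # success probability
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(p) + k * math.log(1 - p))

theta, eta = 2.0, 0.3
mu, var = theta / (1 - eta), theta / (1 - eta) ** 3   # GP mean and variance
for k in range(6):
    print(k, round(gp_pmf(k, theta, eta), 4), round(nb_pmf(k, mu, var), 4))
```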

Journal ArticleDOI
TL;DR: In this article, the authors obtained a 5 ksec deep Chandra X-ray Observatory ACIS-I map of the 9.3 square degree Bootes field of the NOAO Deep Wide-Field Survey.
Abstract: We obtained a 5 ksec deep Chandra X-ray Observatory ACIS-I map of the 9.3 square degree Bootes field of the NOAO Deep Wide-Field Survey. Here we describe the data acquisition and analysis strategies leading to a catalog of 4642 (3293) point sources with 2 or more (4 or more) counts, corresponding to a limiting flux of roughly 4 (8) x 10^{-15} erg cm^{-2} s^{-1} in the 0.5-7 keV band. These Chandra XBootes data are unique in that they constitute the widest contiguous X-ray field yet observed to such a faint flux limit. Because of the extraordinarily low background of the ACIS, we expect only 14% (0.7%) of the sources to be spurious. We also detected 43 extended sources in this survey. The distribution of the point sources among the 126 pointings (ACIS-I has a 16 x 16 arcminute field of view) is consistent with Poisson fluctuations about the mean of 36.8 sources per pointing. While a smoothed image of the point source distribution is clumpy, there is no statistically significant evidence of large scale filamentary structure. We do find, however, that for θ > 1 arcminute, the angular correlation function of these sources is consistent with previous measurements, following a power law in angle with slope -0.7. In a 1.4 deg^{2} sample of the survey, approximately 87% of the sources with 4 or more counts have an optical counterpart to R ~ 26 mag. As part of a larger program of optical spectroscopy of the NDWFS Bootes area, spectra have been obtained for ~900 of the X-ray sources, most of which are QSOs or AGN.

Journal ArticleDOI
TL;DR: In this article, a Poisson log-bilinear projection model is applied to the forecasting of the gender- and age-specific mortality rates for Belgium on the basis of mortality statistics relating to the period 1950-2000.
Abstract: This paper proposes bootstrap procedures for expected remaining lifetimes and life annuity single premiums in a dynamic mortality environment. Assuming a further continuation of the stable pace of mortality decline, a Poisson log-bilinear projection model is applied to the forecasting of the gender- and age-specific mortality rates for Belgium on the basis of mortality statistics relating to the period 1950-2000. Bootstrap procedures are then used to obtain confidence intervals on various actuarial quantities.
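
The Poisson log-bilinear model referred to here is usually written as follows, with D_{xt} the deaths and E_{xt} the exposure at age x in year t; the period index κ_t is then forecast with a time-series model, and bootstrapping the Poisson counts propagates uncertainty to the annuity values:

```latex
D_{xt} \sim \mathrm{Poisson}\bigl(E_{xt}\, \mu_x(t)\bigr),
\qquad
\ln \mu_x(t) = \alpha_x + \beta_x \kappa_t ,
```

with identification constraints such as \sum_t \kappa_t = 0 and \sum_x \beta_x = 1.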

Journal ArticleDOI
TL;DR: In this paper, a theory of quantile regression in the tails is developed, obtaining the large sample properties of extremal (extreme order and intermediate order) quantile regression estimators for the linear quantile regression model with the tails restricted to the domain of minimum attraction.
Abstract: Quantile regression is an important tool for estimation of conditional quantiles of a response Y given a vector of covariates X. It can be used to measure the effect of covariates not only in the center of a distribution, but also in the upper and lower tails. This paper develops a theory of quantile regression in the tails. Specifically, it obtains the large sample properties of extremal (extreme order and intermediate order) quantile regression estimators for the linear quantile regression model with the tails restricted to the domain of minimum attraction and closed under tail equivalence across regressor values. This modeling setup combines restrictions of extreme value theory with leading homoscedastic and heteroscedastic linear specifications of regression analysis. In large samples, extreme order regression quantiles converge weakly to argmin functionals of stochastic integrals of Poisson processes that depend on regressors, while intermediate regression quantiles and their functionals converge to normal vectors with variance matrices dependent on the tail parameters and the regressor design.

Journal ArticleDOI
TL;DR: In this paper, several parametric zero-inflated count distributions, including the ZIP, ZINB, ZIGP and ZIDP, were presented to accommodate the excess zeros for insurance claim count data.
Abstract: On some occasions, claim frequency data in general insurance may not follow the traditional Poisson distribution; in particular, they are zero-inflated. Extra dispersion appears as the number of observed zeros exceeding the number of expected zeros under the Poisson or even the negative binomial distribution assumptions. This paper presents several parametric zero-inflated count distributions, including the ZIP, ZINB, ZIGP and ZIDP, to accommodate the excess zeros for insurance claim count data. Different count distributions in the second component are considered to allow flexibility to control the distribution shape. The generalized Pearson χ² statistic, Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) are used as goodness-of-fit and model selection measures. With the presence of extra zeros in a data set of automobile insurance claims, our results show that the application of zero-inflated count data models, and in particular the zero-inflated double Poisson regression model, provides a good fit to the data.
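
For contrast with the hurdle model discussed elsewhere on this page, a zero-inflated Poisson mixes a structural-zero component with a full Poisson, so zeros can arise from either part. A minimal sketch of the ZIP pmf; pi0 and lam are illustrative names, and the paper's regression structure is omitted.

```python
import math

def zip_pmf(k, pi0, lam):
    """Zero-inflated Poisson pmf: a structural zero with probability pi0,
    otherwise Poisson(lam), which may itself produce zeros."""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi0 * (k == 0) + (1 - pi0) * pois

print(zip_pmf(0, pi0=0.3, lam=2.0), zip_pmf(2, pi0=0.3, lam=2.0))
```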

Journal ArticleDOI
TL;DR: The purpose of this article is to compare and contrast the use of these three methods for the analysis of infrequently occurring count data, and the strengths, limitations, and special considerations of each approach are discussed.
Abstract: Nurses and other health researchers are often concerned with infrequently occurring, repeatable, health-related events such as number of hospitalizations, pregnancies, or visits to a health care provider. Reports on the occurrence of such discrete events take the form of non-negative integer or count data. Because the counts of infrequently occurring events tend to be non-normally distributed and highly positively skewed, the use of ordinary least squares (OLS) regression with non-transformed data has several shortcomings. Techniques such as Poisson regression and negative binomial regression may provide more appropriate alternatives for analyzing these data. The purpose of this article is to compare and contrast the use of these three methods for the analysis of infrequently occurring count data. The strengths, limitations, and special considerations of each approach are discussed. Data from the National Longitudinal Survey of Adolescent Health (AddHealth) are used for illustrative purposes.
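
A compact way to see the contrast is to fit all three estimators to simulated overdispersed counts. A hedged sketch assuming statsmodels is available; the simulated data and the fixed NB dispersion alpha are our choices, not the article's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2_000
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.5 * x)                  # true log-linear mean
y = rng.negative_binomial(2, 2 / (2 + mu))  # overdispersed counts with mean mu

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                                         # ignores skewness
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()          # variance = mean
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(ols.params[1], pois.params[1], nb.params[1])               # slope estimates
```

Note that the OLS slope is on the raw-count scale, so it is not directly comparable to the log-scale GLM coefficients; that mismatch is part of the article's point.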

Journal ArticleDOI
Chia-Chin Chong1, Su Khiong Yong1
TL;DR: A generic statistical-based ultrawide-band (UWB) indoor channel model which incorporates the clustering of multipath components (MPCs) and a new distribution, namely, a mixture of two Poisson processes, is proposed to model the ray arrival times.
Abstract: A generic statistical-based ultrawide-band (UWB) indoor channel model which incorporates the clustering of multipath components (MPCs) is proposed. The model is derived using measurement data collected in the frequency band of 3-10 GHz in various types of high-rise apartment under different propagation scenarios. The measurement procedure allows the characterization of both the large-scale and the small-scale statistics of the channel. The main objective is to study multipath propagation behavior, particularly the phenomenon of clustered MPCs. The description of clustering observed in the channel uses two classes of parameters which characterize the clustering and the MPCs respectively. All parameters are described by a set of empirical probability density functions derived from the measured data such as the distribution of clusters and MPCs, cluster and MPC arrival statistics and small-scale amplitude fading statistics. A new distribution, namely, a mixture of two Poisson processes, is proposed to model the ray arrival times. This new distribution fits the empirical data better than the single Poisson process proposed in the conventional Saleh-Valenzuela (S-V) model. Analysis results show that the small-scale amplitude fading statistics are best modeled by the Weibull distribution. The Weibull b-shape parameter is lognormally distributed and is found to be invariant across the excess delay. Additionally, the temporal correlation between adjacent path amplitudes is investigated. The amplitude temporal correlation coefficients are found to be relatively small, and thus, can be assumed to be negligible in reality. The proposed model can provide a realistic simulation platform for UWB communication systems.
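
One simple way to realize such a mixture of two Poisson processes is to draw each ray interarrival from a two-component mixture of exponentials. The sketch below assumes that form; beta, lam1, and lam2 are illustrative symbols, not the paper's fitted values.

```python
import numpy as np

def ray_arrival_times(beta, lam1, lam2, n_rays, seed=0):
    """Arrival times whose interarrivals are Exp(lam1) with probability beta
    and Exp(lam2) otherwise: a mixture of two Poisson arrival processes."""
    rng = np.random.default_rng(seed)
    pick = rng.random(n_rays) < beta
    gaps = np.where(pick,
                    rng.exponential(1 / lam1, n_rays),
                    rng.exponential(1 / lam2, n_rays))
    return np.cumsum(gaps)

print(ray_arrival_times(beta=0.6, lam1=0.5, lam2=0.1, n_rays=8))
```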

Journal ArticleDOI
TL;DR: The approach presented in this paper enables researchers to incorporate the pattern of spatial dependence of mortality rates into the mapping of risk values and the quantification of the associated uncertainty, while being easier to implement than a full Bayesian model.
Abstract: Background: Cancer mortality maps are used by public health officials to identify areas of excess and to guide surveillance and control activities. Quality of decision-making thus relies on an accurate quantification of risks from observed rates, which can be very unreliable when computed from sparsely populated geographical units or recorded for minority populations. This paper presents a geostatistical methodology that accounts for spatially varying population sizes and spatial patterns in the processing of cancer mortality data. Simulation studies are conducted to compare the performances of Poisson kriging to a few simple smoothers (i.e., population-weighted estimators and empirical Bayes smoothers) under different scenarios for the disease frequency, the population size, and the spatial pattern of risk. A public-domain executable with example datasets is provided.

Journal ArticleDOI
TL;DR: In this article, the impact of the choice of two alternative prior distributions (i.e., gamma versus lognormal) and the effect of allowing variability in the dispersion parameter on the outcome of the analysis were investigated.
Abstract: Many types of statistical models have been proposed for estimating accident risk in transport networks, ranging from basic Poisson and negative binomial models to more complicated models, such as zero-inflated and hierarchical Bayesian models. However, little systematic effort has been devoted to comparing the performance and practical implications of these models and ranking criteria when they are used for identifying hazardous locations. This research investigates the relative performance of three alternative models: the traditional negative binomial model, the heterogeneous negative binomial model, and the Poisson lognormal model. In particular, this work focuses on the impact of the choice of two alternative prior distributions (i.e., gamma versus lognormal) and the effect of allowing variability in the dispersion parameter on the outcome of the analysis. From each model, two alternative accident estimators are computed by using the conditional mean under both marginal and posterior distributions. A sample of Canadian highway-railway intersections with an accident history of 5 years is used to calibrate and evaluate the three alternative models and the two ranking criteria. It is concluded that the choice of model assumptions and ranking criteria can lead to considerably different lists of hazardous locations.

Journal ArticleDOI
TL;DR: A general underlying Poisson variable framework for mixed discrete outcomes, accommodating dependency through an additive gamma frailty model for the Poisson means is proposed.
Abstract: In studies of complex health conditions, mixtures of discrete outcomes (event time, count, binary, ordered categorical) are commonly collected. For example, studies of skin tumorigenesis record latency time prior to the first tumor, increases in the number of tumors at each week, and the occurrence of internal tumors at the time of death. Motivated by this application, we propose a general underlying Poisson variable framework for mixed discrete outcomes, accommodating dependency through an additive gamma frailty model for the Poisson means. The model has log-linear, complementary log-log, and proportional hazards forms for count, binary and discrete event time outcomes, respectively. Simple closed form expressions can be derived for the marginal expectations, variances, and correlations. Following a Bayesian approach to inference, conditionally-conjugate prior distributions are chosen that facilitate posterior computation via an MCMC algorithm. The methods are illustrated using data from a Tg.AC mouse bioassay study.

Journal ArticleDOI
TL;DR: It is proved that an $(s, S)$ policy is optimal in a continuous-review stochastic inventory model with a fixed ordering cost when the demand is a mixture of a diffusion process and a compound Poisson process with exponentially distributed jump sizes.
Abstract: We prove that an $(s, S)$ policy is optimal in a continuous-review stochastic inventory model with a fixed ordering cost when the demand is a mixture of (i) a diffusion process and a compound Poisson process with exponentially distributed jump sizes, and (ii) a constant demand and a compound Poisson process. The proof uses the theory of impulse control. The Bellman equation of dynamic programming for such a problem reduces to a set of quasi-variational inequalities (QVI). An analytical study of the QVI leads to showing the existence of an optimal policy as well as the optimality of an $(s, S)$ policy. Finally, the combination of a diffusion and a general compound Poisson demand is not completely solved. We explain the difficulties and what remains open. We also provide a numerical example for the general case.
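
A toy simulation makes the policy concrete. This sketch assumes the pure compound Poisson case with exponentially distributed demand sizes and zero replenishment lead time, a simplification of the paper's model; all names are ours.

```python
import numpy as np

def simulate_sS(s, S, lam, mean_jump, horizon, seed=0):
    """Continuous-review (s, S) policy under compound Poisson demand with
    Exp(mean_jump) jump sizes: when inventory drops to or below s, order up
    to S instantly. Returns the number of orders placed."""
    rng = np.random.default_rng(seed)
    t, level, orders = 0.0, float(S), 0
    while t < horizon:
        t += rng.exponential(1 / lam)         # next demand epoch
        level -= rng.exponential(mean_jump)   # compound Poisson demand size
        if level <= s:                        # reorder trigger
            level = float(S)
            orders += 1
    return orders

print(simulate_sS(s=2.0, S=10.0, lam=3.0, mean_jump=1.5, horizon=100.0))
```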

Journal ArticleDOI
TL;DR: In this article, an R package called bivpois is presented for maximum likelihood estimation of the parameters of bivariate and diagonal inflated bivariate Poisson regression models, and an Expectation-Maximization (EM) algorithm is implemented.
Abstract: In this paper we present an R package called bivpois for maximum likelihood estimation of the parameters of bivariate and diagonal inflated bivariate Poisson regression models. An Expectation-Maximization (EM) algorithm is implemented. Inflated models allow for modelling both over-dispersion (or under-dispersion) and negative correlation and thus they are appropriate for a wide range of applications. Extensions of the algorithms for several other models are also discussed. Detailed guidance and implementation on simulated and real data sets using the bivpois package are provided.
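
The bivariate Poisson model underlying such packages is commonly built by trivariate reduction, as the following minimal sketch illustrates; it mimics the construction, not the bivpois API.

```python
import numpy as np

def bivariate_poisson(l1, l2, l12, size, seed=0):
    """Trivariate reduction: X = X1 + X3, Y = X2 + X3 with independent
    X1 ~ Poi(l1), X2 ~ Poi(l2), X3 ~ Poi(l12); the shared component gives
    Cov(X, Y) = l12."""
    rng = np.random.default_rng(seed)
    x1 = rng.poisson(l1, size)
    x2 = rng.poisson(l2, size)
    x3 = rng.poisson(l12, size)
    return x1 + x3, x2 + x3

x, y = bivariate_poisson(1.2, 0.8, 0.5, size=100_000)
print(np.cov(x, y)[0, 1])  # close to l12 = 0.5
```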

Journal ArticleDOI
TL;DR: In this paper, a unified posterior analysis of classes of discrete random probability measures is presented that identifies and exploits features common to all these models and circumvents many of the difficult issues involved in Bayesian nonparametric calculus, including a combinatorial component.
Abstract: This article develops, and describes how to use, results concerning disintegrations of Poisson random measures. These results are fashioned as simple tools that can be tailor-made to address inferential questions arising in a wide range of Bayesian nonparametric and spatial statistical models. The Poisson disintegration method is based on the formal statement of two results concerning a Laplace functional change of measure and a Poisson Palm/Fubini calculus in terms of random partitions of the integers (1,...,n). The techniques are analogous to, but much more general than, techniques for the Dirichlet process and weighted gamma process developed in [Ann. Statist. 12 (1984) 351-357] and [Ann. Inst. Statist. Math. 41 (1989) 227-245]. In order to illustrate the flexibility of the approach, large classes of random probability measures and random hazards or intensities which can be expressed as functionals of Poisson random measures are described. We describe a unified posterior analysis of classes of discrete random probability which identifies and exploits features common to all these models. The analysis circumvents many of the difficult issues involved in Bayesian nonparametric calculus, including a combinatorial component. This allows one to focus on the unique features of each process which are characterized via real-valued functions h. The applicability of the technique is further illustrated by obtaining explicit posterior expressions for Lévy-Cox moving average processes within the general setting of multiplicative intensity models. In addition, novel computational procedures, similar to efficient procedures developed for the Dirichlet process, are briefly discussed for these models.

Journal ArticleDOI
TL;DR: In this article, it was shown that any quasi-stationary state for the independent dynamics, with an exponentially bounded integrated density of particles, corresponds to a superposition of Poisson processes with densities $p(dx) = e^{-sx}\,s\,dx$ with $s > 0$, restricted to the relevant σ-algebra.
Abstract: We study systems of particles on a line which have a maximum, are locally finite and evolve with independent increments. Quasi-stationary states are defined as probability measures, on the σ-algebra generated by the gap variables, for which the joint distribution of gaps between particles is invariant under the time evolution. Examples are provided by Poisson processes with densities of the form $p(dx) = e^{-sx}\,s\,dx$, with $s > 0$, and linear superpositions of such measures. We show that, conversely, any quasi-stationary state for the independent dynamics, with an exponentially bounded integrated density of particles, corresponds to a superposition of Poisson processes with densities $p(dx) = e^{-sx}\,s\,dx$ with $s > 0$, restricted to the relevant σ-algebra. Among the systems for which this question is of some relevance are spin-glass models of statistical mechanics, where the point process represents the collection of the free energies of distinct pure states, the time evolution corresponds to the addition of a spin variable and the Poisson measures described above correspond to the so-called REM states.

Journal ArticleDOI
TL;DR: In this paper, conditions under which parametric estimates of the intensity of a spatial-temporal point process are consistent are considered, and the conditions for consistent estimation are verified and examples to illustrate the extent to which consistent estimation may be achieved.

Journal ArticleDOI
TL;DR: A quasi-likelihood method of moments technique is proposed in which the Bernoulli outcome is naively assumed to be Poisson, with the mean (success probability) following a log-linear model, and the Poisson maximum likelihood equations are used to estimate the regression coefficients without constraints.
Abstract: For a prospective randomized clinical trial with two groups, the relative risk can be used as a measure of treatment effect and is directly interpretable as the ratio of success probabilities in the new treatment group versus the placebo group. For a prospective study with many covariates and a binary outcome (success or failure), relative risk regression may be of interest. If we model the log of the success probability as a linear function of covariates, the regression coefficients are log-relative risks. However, using such a log-linear model with a Bernoulli likelihood can lead to convergence problems in the Newton-Raphson algorithm. This is likely to occur when the success probabilities are close to one. A constrained likelihood method proposed by Wacholder (1986, American Journal of Epidemiology 123, 174-184) also has convergence problems. We propose a quasi-likelihood method of moments technique in which we naively assume the Bernoulli outcome is Poisson, with the mean (success probability) following a log-linear model. We use the Poisson maximum likelihood equations to estimate the regression coefficients without constraints. Using method of moments ideas, one can show that the estimates using the Poisson likelihood will be consistent and asymptotically normal. We apply these methods to a double-blinded randomized trial in primary biliary cirrhosis of the liver (Markus et al., 1989, New England Journal of Medicine 320, 1709-1713).
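
The device is easy to reproduce with standard software: fit a Poisson GLM with a log link to the binary outcome and pair it with robust (sandwich) standard errors, so the exponentiated coefficient estimates a relative risk. A minimal sketch on simulated data, assuming statsmodels; all values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
x = rng.binomial(1, 0.5, n)            # e.g. a treatment indicator
p = np.exp(-1.2 + 0.4 * x)             # log-linear success probability (< 1)
y = rng.binomial(1, p)                 # Bernoulli outcome

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params[1]))           # relative risk estimate, about exp(0.4)
```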