
Showing papers on "Poisson distribution" published in 1981


Book
28 May 1981
TL;DR: This book surveys finite mixture distributions: their applications, estimation of their parameters (by maximum likelihood, Bayesian, moment, and graphical methods) for normal, exponential, binomial, Poisson, and other component families, and methods for determining the number of components in a mixture.
Abstract (table of contents):
1 General introduction
  1.1 Introduction
  1.2 Some applications of finite mixture distributions
  1.3 Definition
  1.4 Estimation methods
    1.4.1 Maximum likelihood
    1.4.2 Bayesian estimation
    1.4.3 Inversion and error minimization
    1.4.4 Other methods
    1.4.5 Estimating the number of components
  1.5 Summary
2 Mixtures of normal distributions
  2.1 Introduction
  2.2 Some descriptive properties of mixtures of normal distributions
  2.3 Estimating the parameters in normal mixture distributions
    2.3.1 Method of moments estimation
    2.3.2 Maximum likelihood estimation
    2.3.3 Maximum likelihood estimates for grouped data
    2.3.4 Obtaining initial parameter values for the maximum likelihood estimation algorithms
    2.3.5 Graphical estimation techniques
    2.3.6 Other estimation methods
  2.4 Summary
3 Mixtures of exponential and other continuous distributions
  3.1 Exponential mixtures
  3.2 Estimating exponential mixture parameters
    3.2.1 The method of moments and generalizations
    3.2.2 Maximum likelihood
  3.3 Properties of exponential mixtures
  3.4 Other continuous distributions
    3.4.1 Non-central chi-squared distribution
    3.4.2 Non-central F distribution
    3.4.3 Beta distributions
    3.4.4 Doubly non-central t distribution
    3.4.5 Planck's distribution
    3.4.6 Logistic
    3.4.7 Laplace
    3.4.8 Weibull
    3.4.9 Gamma
  3.5 Mixtures of different component types
  3.6 Summary
4 Mixtures of discrete distributions
  4.1 Introduction
  4.2 Mixtures of binomial distributions
    4.2.1 Moment estimators for binomial mixtures
    4.2.2 Maximum likelihood estimators for mixtures of binomial distributions
    4.2.3 Other estimation methods for mixtures of binomial distributions
  4.3 Mixtures of Poisson distributions
    4.3.1 Moment estimators for mixtures of Poisson distributions
    4.3.2 Maximum likelihood estimators for a Poisson mixture
  4.4 Mixtures of Poisson and binomial distributions
  4.5 Mixtures of other discrete distributions
  4.6 Summary
5 Miscellaneous topics
  5.1 Introduction
  5.2 Determining the number of components in a mixture
    5.2.1 Informal diagnostic tools for the detection of mixtures
    5.2.2 Testing hypotheses on the number of components in a mixture
  5.3 Probability density function estimation
  5.4 Miscellaneous problems
  5.5 Summary
References

1,354 citations
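
The maximum likelihood estimation the book treats for Poisson mixtures (section 4.3.2) is nowadays usually computed with the EM algorithm. A minimal sketch for a two-component Poisson mixture; the starting values and synthetic data below are illustrative, not from the book:

```python
import math
import numpy as np

def poisson_pmf(x, lam):
    # stable pmf via logs; adequate for the modest counts used here
    return math.exp(-lam + x * math.log(lam) - math.lgamma(x + 1))

def em_poisson_mixture(data, lam1, lam2, w=0.5, iters=300):
    """EM for a two-component Poisson mixture; returns (w, lam1, lam2)."""
    data = np.asarray(data)
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each count
        r = np.array([
            w * poisson_pmf(x, lam1)
            / (w * poisson_pmf(x, lam1) + (1 - w) * poisson_pmf(x, lam2))
            for x in data
        ])
        # M-step: mixing weight and responsibility-weighted means
        w = r.mean()
        lam1 = (r * data).sum() / r.sum()
        lam2 = ((1 - r) * data).sum() / (1 - r).sum()
    return w, lam1, lam2

rng = np.random.default_rng(0)
data = np.concatenate([rng.poisson(2.0, 600), rng.poisson(9.0, 400)])
print(em_poisson_mixture(data, lam1=1.0, lam2=5.0))  # roughly (0.6, 2.0, 9.0)
```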


Journal ArticleDOI
TL;DR: In this article, a general method for solving Poisson's equation without shape approximation for an arbitrary periodic charge distribution is presented, based on the concept of multipole potentials and the boundary value problem for a sphere.
Abstract: A general method for solving Poisson’s equation without shape approximation for an arbitrary periodic charge distribution is presented. The method is based on the concept of multipole potentials and the boundary value problem for a sphere. In contrast to the usual Ewald‐type methods, this method has only absolutely and uniformly convergent reciprocal space sums, and treats all components of the charge density equivalently. Applications to band structure calculations and lattice summations are also discussed.

233 citations


Journal ArticleDOI
TL;DR: A mixed categorical-continuous variable model is proposed for the analysis of mortality rates and shows that, though a gradient in lung cancer mortality rates exists in space, the gradient is restricted to specific demographic categories identified by race, age and sex.
Abstract: A mixed categorical-continuous variable model is proposed for the analysis of mortality rates. This model differs from other available models, such as weighted least squares and loglinear models, in that the within-cell populations are assumed to be heterogeneous in their levels of mortality risk. Heterogeneity implies that, in addition to the sampling variance considered in other available models, there will be a second component of variance due solely to within-cell heterogeneity. Maximum likelihood procedures are presented for the estimation of the model parameters. These procedures are based on the assumption that the distribution function for each cell death count is the negative binomial probability function. This assumption is shown to be equivalent to assuming a mixture of Poisson processes with the differential risk levels among individuals within each cell being governed by a two-parameter gamma distribution. The model is applied to data on lung cancer mortality for 1970-1975 for the 100 counties of North Carolina. The analysis shows that, though a gradient in lung cancer mortality rates exists in space, the gradient is restricted to specific demographic categories identified by race, age and sex.

108 citations
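
The equivalence the authors invoke (a gamma mixture of Poisson variables is negative binomial) is easy to check by simulation. A small sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
k, theta, n = 2.0, 3.0, 200_000  # gamma shape/scale (illustrative values)

# Individual risk levels drawn from a gamma, then Poisson counts given the risk
lam = rng.gamma(k, theta, n)
counts = rng.poisson(lam)

# Marginal is negative binomial: mean k*theta, variance k*theta*(1+theta)
print(counts.mean(), k * theta)               # ~6.0
print(counts.var(), k * theta * (1 + theta))  # ~24.0, strongly overdispersed
```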


Journal ArticleDOI
TL;DR: The question of how to characterize the bacterial density in a body of water when data are available as counts from a number of small-volume samples was examined for cases where either the Poisson or negative binomial probability distributions could be used to describe the bacteriological data.
Abstract: The question of how to characterize the bacterial density in a body of water when data are available as counts from a number of small-volume samples was examined for cases where either the Poisson or negative binomial probability distributions could be used to describe the bacteriological data. The suitability of the Poisson distribution when replicate analyses were performed under carefully controlled conditions and of the negative binomial distribution for samples collected from different locations and over time were illustrated by two examples. In cases where the negative binomial distribution was appropriate, a procedure was given for characterizing the variability by dividing the bacterial counts into homogeneous groups. The usefulness of this procedure was illustrated for the second example based on survey data for Lake Erie. A further illustration of the difference between results based on the Poisson and negative binomial distributions was given by calculating the probability of obtaining all samples sterile, assuming various bacterial densities and sample sizes.

89 citations
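
The closing calculation (probability that all samples are sterile) follows directly from the zero-class probabilities of the two distributions. A sketch with hypothetical densities, volumes, and dispersion:

```python
import math

def p_all_sterile_poisson(density, volume, n):
    # P(count = 0) for one sample is exp(-density * volume)
    return math.exp(-n * density * volume)

def p_all_sterile_negbin(density, volume, n, k):
    # Negative binomial with mean mu and dispersion k: P(0) = (k / (k + mu))^k
    mu = density * volume
    return (k / (k + mu)) ** (k * n)

# e.g. 10 samples of 1 mL at 0.2 organisms/mL; k = 0.5 models strong clumping
print(p_all_sterile_poisson(0.2, 1.0, 10))      # ~0.135
print(p_all_sterile_negbin(0.2, 1.0, 10, 0.5))  # ~0.19: clumping raises it
```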


Journal ArticleDOI
TL;DR: In this article, it was shown that for Poisson or multinomial contingency table data, the conditional distribution is product multinomial when conditioning on observed values of explanatory variables.
Abstract: SUMMARY For Poisson or multinomial contingency table data the conditional distribution is product multinomial when conditioning on observed values of explanatory variables. Birch (1963) showed that under the restriction formed by keeping the marginal totals of one margin fixed at their observed values the Poisson, multinomial and product multinomial likelihoods are proportional and give the same estimates for common parameters in the log linear model. Here the inverses of the Fisher information matrices are shown to be identical over common parameters so that the asymptotic covariance matrices of the estimates correspond.

82 citations
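
The proportionality rests on the standard identity that independent Poisson counts, conditioned on their total, are multinomial:

```latex
% N_i ~ Poisson(mu_i) independent, mu = sum_i mu_i; conditioning on the
% total yields a multinomial, which is why the likelihoods are
% proportional over common parameters.
\Pr\!\left(N_1 = n_1, \dots, N_k = n_k \,\middle|\, \sum_i N_i = n\right)
  = \frac{\prod_i e^{-\mu_i}\mu_i^{n_i}/n_i!}{e^{-\mu}\mu^{n}/n!}
  = \binom{n}{n_1, \dots, n_k}\prod_{i=1}^{k}\left(\frac{\mu_i}{\mu}\right)^{n_i}
```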


Journal ArticleDOI
TL;DR: In this paper, the authors study the model introduced in Gerber (1974), in which the aggregate claims form a compound Poisson process with Poisson parameter λ.
Abstract: 1. Introduction and Summary. We shall consider the model that was introduced in Gerber (1974). Let {St} denote the compound Poisson process of the aggregate claims (given by the Poisson parameter λ...

82 citations
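
For concreteness, aggregate claims under such a model are simple to simulate; a minimal sketch with exponential claim sizes (the claim-size law and all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_claims(lam, t, mean_claim, n_paths=100_000):
    """Draws of S_t: N_t ~ Poisson(lam*t) claims with exponential sizes."""
    n = rng.poisson(lam * t, n_paths)
    # A sum of k iid Exp(mean) variables is Gamma(k, mean), so one gamma
    # draw per path replaces an inner loop; paths with n = 0 are zeroed.
    return rng.gamma(np.where(n > 0, n, 1), mean_claim) * (n > 0)

s = aggregate_claims(lam=2.0, t=1.0, mean_claim=1.5)
print(s.mean(), 2.0 * 1.0 * 1.5)  # E[S_t] = lam * t * E[claim size]
```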


Journal ArticleDOI
TL;DR: Two summary relative risk estimators, which are analogues of the Mantel-Haenszel summary odds ratio, are derived for use in prospective studies with stratified data, and one of the proposed estimators is shown to be closely related to the maximum likelihood estimator of a common risk ratio.

76 citations
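
For reference, the Mantel-Haenszel-type summary risk ratio for stratified cohort data is usually written as below; the paper's two estimators are analogues of this form, and the notation here is the generic textbook one, not necessarily the authors':

```latex
% a_i, b_i = cases among the N_{1i} exposed and N_{0i} unexposed subjects
% in stratum i; N_i = N_{1i} + N_{0i}.
\widehat{\mathrm{RR}}_{\mathrm{MH}}
  = \frac{\sum_i a_i\, N_{0i} / N_i}{\sum_i b_i\, N_{1i} / N_i}
```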


Journal ArticleDOI
TL;DR: The hypothesis that overdispersion of the chromosome aberration number per cell results from multiple aberrations per particle traversal is investigated in mathematical terms, and the developed formalism provides a method to determine the efficiency of this theory.
Abstract: The hypothesis that overdispersion of the chromosome aberration number per cell results from multiple aberrations per particle traversal is investigated in mathematical terms. At a given absorbed dose, Poisson distributions are assumed both for the number of ionizing particles traversing a cell nucleus and for the number of aberrations induced by a single particle traversal. The resulting distribution of the number of aberrations per cell is the Neyman type A distribution, a special case of the generalized Poisson distribution. This function is generally overdispersed, its relative variance 1 + λ being determined by the expectation value λ of aberrations per particle traversal. Overdispersion therefore increases with the probability of multiple aberrations per particle traversal, typical for high LET, whereas regular dispersion results at low LET. Data from experiments with neutrons and α particles are found to agree with this theory. The developed formalism provides a method to determine the efficiency o...

76 citations
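
The stated relative variance 1 + λ is easy to reproduce by simulation: draw a Poisson number of traversals per cell, then a Poisson(λ) number of aberrations per traversal (parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_traversals, lam, cells = 1.5, 0.8, 200_000  # traversals/cell, aberrations/traversal

# Poisson number of particle traversals per cell nucleus; the total
# aberration count is then Poisson(lam * traversals), since a sum of n
# iid Poisson(lam) draws is Poisson(n * lam).
traversals = rng.poisson(mean_traversals, cells)
aberrations = rng.poisson(lam * traversals)

# Neyman type A: the relative variance (variance/mean) should be 1 + lam
print(aberrations.var() / aberrations.mean(), 1 + lam)
```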


Journal ArticleDOI
TL;DR: In this article, it was shown that the forcing function of Poisson's equation for the mean or fluctuating pressure in a turbulent flow can be divided into two parts, one related to the square of the rate of strain and the other to the square of the vorticity.
Abstract: It is shown that the "forcing function" (the right-hand side) of Poisson's equation for the mean or fluctuating pressure in a turbulent flow can be divided into two parts, one related to the square of the rate of strain and the other to the square of the vorticity.

59 citations
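
For incompressible flow this is the familiar decomposition obtained by taking the divergence of the momentum equation (index notation; the split below is the standard identity, stated here for orientation rather than as the paper's derivation):

```latex
% S_ij = rate-of-strain tensor, Omega_ij = rotation tensor, omega =
% vorticity magnitude, using omega^2 = 2 Omega_ij Omega_ij.
\nabla^2\!\left(\frac{p}{\rho}\right)
  = -\frac{\partial u_i}{\partial x_j}\frac{\partial u_j}{\partial x_i}
  = \Omega_{ij}\Omega_{ij} - S_{ij}S_{ij}
  = \tfrac{1}{2}\,\omega^2 - S_{ij}S_{ij}
```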


Journal ArticleDOI
TL;DR: A generalized version of a theorem given by Teicher is used to show that the finite mixtures of the following multivariate distributions are also identifiable: negative binomial, logarithmic series, Poisson, normal, inverse Gaussian, and random walk.
Abstract: Finite mixtures of the following ten families of univariate distributions are shown to be identifiable: logarithmic series, discrete rectangular, rectangular, first law of Laplace, noncentral χ², logistic, generalized logistic, generalized hyperbolic-secant, inverse Gaussian, and random walk. A generalized version of a theorem given by Teicher is used to show that the finite mixtures of the following multivariate distributions are also identifiable: negative binomial, logarithmic series, Poisson, normal, inverse Gaussian, and random walk.

57 citations


Journal ArticleDOI
TL;DR: The Neyman type-A and Thomas counting distributions turn out to provide a good description for the counting of photons generated by multiplied Poisson processes, as long as the time course of the multiplication is short compared with the counting time.
Abstract: The Neyman type-A and Thomas counting distributions provide a useful description for a broad variety of phenomena, from the distribution of larvae on small plots of land to the distribution of galaxies in space. They turn out to provide a good description for the counting of photons generated by multiplied Poisson processes, as long as the time course of the multiplication is short compared with the counting time. Analytic expressions are presented for the probability distributions, moment generating functions, moments, and variance-to-mean ratios. Sums of Neyman type-A and Thomas random variables are shown to retain their form under the constraint of constant multiplication parameter. Conditions under which the Neyman type-A and Thomas converge in distribution to the fixed multiplicative Poisson and to the Gaussian are presented. This latter result is most important, for it provides a ready solution to likelihood-ratio detection, estimation, and discrimination problems in the presence of many kinds of signal and noise. The doubly stochastic Neyman type-A, Thomas, and fixed multiplicative Poisson distributions are also considered. A number of explicit applications are presented. These include (1) the photon counting scintillation detection of nuclear particles, when the particle flux is low, (2) the photon counting detection of weak optical signals in the presence of ionizing radiation, (3) the design of a star-scanner spacecraft guidance system for the hostile environment of space, (4) the neural pulse counting distribution in the cat retinal ganglion cell at low light levels, and (5) the transfer of visual signal to the cortex in a classical psychophysics experiment. A number of more complex contagious distributions arising from multiplicative processes are also discussed, with particular emphasis on photon counting and direct-detection optical communications.
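
For reference, the Neyman type-A counting distribution for a Poisson(μ) number of primary events, each multiplied into a Poisson(a) number of counts, has generating function and variance-to-mean ratio:

```latex
G(s) = \exp\!\left\{\mu\left(e^{a(s-1)} - 1\right)\right\},
\qquad
\mathrm{E}[N] = \mu a,
\qquad
\frac{\operatorname{Var}[N]}{\mathrm{E}[N]} = 1 + a
```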

Journal ArticleDOI
TL;DR: In this paper, the effect of variance stabilizing transformations on the significance level and power of the F test, applied to Binomial and Poisson variables, was examined by a Monte Carlo study.
Abstract: The effect of the variance stabilizing transformations on the significance level and power of the F test, applied to Binomial and Poisson variables, was examined by a Monte Carlo study. The results...
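
A minimal version of such an experiment, using the square-root transformation customarily applied to stabilize Poisson variance (Anscombe's x + 3/8 variant); group sizes and means are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Three Poisson samples with equal means; stabilize variance before ANOVA
groups = [rng.poisson(5.0, 30) for _ in range(3)]
transformed = [np.sqrt(g + 3.0 / 8.0) for g in groups]

print(stats.f_oneway(*groups))       # F test on the raw counts
print(stats.f_oneway(*transformed))  # F test after variance stabilization
```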

Journal ArticleDOI
Richard Barakat
TL;DR: In this article, the complex amplitude at a point in a speckle pattern that is due to a weak scatterer is modeled as the superposition of N sinusoidal waves of random phase, with the probability density of these phases given by the nonuniform von Mises rather than by the uniform one that characterizes a strong scatterers.
Abstract: The complex amplitude at a point in a speckle pattern that is due to a weak scatterer is modeled as the superposition of N sinusoidal waves of random phase, with the probability density of these phases given by the nonuniform von Mises rather than by the uniform one that characterizes a strong scatterer. Explicit formulas are obtained for both intensity and total phase statistics in terms of a single parameter directly related to the density function of the constituent phasors. The case in which, in addition, N itself is random (governed by a Poisson distribution with mean value 〈N〉) is also studied.
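
The model is straightforward to simulate; a sketch using NumPy's von Mises sampler, with κ = 0 recovering the uniform-phase (strong-scatterer) case. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(mean_n, mu, kappa, trials=20_000):
    """Intensities of phasor sums with von Mises phases and Poisson N."""
    out = np.empty(trials)
    for t in range(trials):
        n = rng.poisson(mean_n)                  # random number of phasors
        phases = rng.vonmises(mu, kappa, n)      # kappa = 0 gives uniform phases
        out[t] = abs(np.exp(1j * phases).sum()) ** 2
    return out

weak = speckle_intensity(mean_n=20, mu=0.0, kappa=2.0)
strong = speckle_intensity(mean_n=20, mu=0.0, kappa=0.0)
print(weak.mean(), strong.mean())  # concentrated phases add a bright coherent term
```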

Journal ArticleDOI
TL;DR: In this article, a linear regression is conducted between the first vertical derivative of gravity and the magnetic values reduced to the pole within each window position, and three parameters are generated at each data interval that describe the internal correlation existing between gravity and magnetic anomalies and may yield information regarding anomaly source properties.
Abstract: Use of Poisson’s theorem to calculate single‐source magnetization‐to‐density ratios may be extended to multisource data by repeatedly applying the theorem within a small, moving data window. In this approach, the moving window traverses the data set in increments of the data interval, and a linear regression is conducted between the first vertical derivative of gravity and the magnetic values reduced to the pole within each window position. Three parameters—correlation coefficient, slope, and intercept—are thereby generated at each data interval that describe the internal correlation existing between gravity and magnetic anomalies and may yield information regarding anomaly source properties. The slope parameter provides an estimate of source magnetization‐to‐density ratio for the anomaly segments within each window position.
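
Schematically, the windowed regression step looks as follows; this is a sketch only, with the vertical derivative and reduction to the pole assumed done upstream:

```python
import numpy as np

def moving_window_regression(x, y, window):
    """Slope, intercept, and correlation of y on x in a sliding window.

    Here x would be the first vertical derivative of gravity and y the
    reduced-to-pole magnetics, sampled at the same stations; the slope
    estimates the magnetization-to-density ratio within each window.
    """
    slopes, intercepts, corrs = [], [], []
    for i in range(len(x) - window + 1):
        xs, ys = x[i:i + window], y[i:i + window]
        slope, intercept = np.polyfit(xs, ys, 1)
        slopes.append(slope)
        intercepts.append(intercept)
        corrs.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(slopes), np.array(intercepts), np.array(corrs)
```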

Journal ArticleDOI
Luc Devroye
TL;DR: An exact method for the generation of Poisson random variables on a computer is presented; the average time required per random variate decreases as the Poisson parameter tends to infinity.
Abstract: An exact method for the generation of Poisson random variables on a computer is presented. The average time required per random variate decreases as the Poisson parameter tends to infinity.
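
For contrast, the classical exact generator (Knuth's product-of-uniforms method) takes only a few lines, but its expected time grows linearly with the mean, which is exactly the behavior the method above avoids:

```python
import math
import random

def poisson_knuth(lam, rand=random.random):
    """Exact Poisson generator: count uniforms until their product
    falls below exp(-lam). Expected cost is O(lam), and exp(-lam)
    underflows for very large lam, hence the interest in alternatives."""
    limit = math.exp(-lam)
    k, prod = 0, rand()
    while prod > limit:
        prod *= rand()
        k += 1
    return k

print(sum(poisson_knuth(4.0) for _ in range(100_000)) / 100_000)  # ~4.0
```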

01 Dec 1981
TL;DR: A new algorithm is developed that is exact, has execution time insensitive to the value of the mean, and is valid whenever the mean is greater than ten; it is compared with three other recently developed algorithms for generating Poisson variates when the mean is large.
Abstract: Approximate algorithms have long been the only available methods for generating Poisson random variates when the mean is large. A new algorithm is developed that is exact, has execution time insensitive to the value of the mean, and is valid whenever the mean is greater than ten. This algorithm is compared to the three other algorithms which have been developed recently for generating Poisson variates when the mean is large. Criteria used are set-up time, marginal execution time, memory requirements, and lines of code. New simple tight bounds on Poisson probabilities contribute to the speed of the algorithm, but are useful in a general context. In addition, Poisson variate generation is surveyed.
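
The "approximate algorithms" referred to are typified by rounding a matched normal variate: cheap and mean-insensitive, but not exact. A sketch of that classic approximation:

```python
import math
import random

def poisson_normal_approx(lam):
    """Approximate Poisson(lam) for large lam via a rounded normal.

    Matches mean and variance but is not exact in the tails; this is
    the kind of shortcut exact large-mean algorithms make unnecessary.
    """
    return max(0, round(random.gauss(lam, math.sqrt(lam))))
```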

Journal ArticleDOI
TL;DR: In this article, it was shown that a theorem of Hooke relating the stationary virtual and actual waiting-time distributions for the GI/G/1 queue extends to the periodic Poisson model; it was then pointed out that Hooke's theorem leads to the extension of a related theorem of Takács.
Abstract: This paper is concerned with asymptotic results for a single-server queue having periodic Poisson input and general service-time distribution, and carries forward the analysis of this model initiated in Harrison and Lemoine [3]. First, it is shown that a theorem of Hooke relating the stationary virtual and actual waiting-time distributions for the GI/G/1 queue extends to the periodic Poisson model; it is then pointed out that Hooke's theorem leads to the extension (developed in [3]) of a related theorem of Takács. Second, it is demonstrated that the asymptotic distribution for the server-load process at a fixed 'time of day' coincides with the distribution for the supremum, over the time horizon [0, ∞), of the sum of a stationary compound Poisson process with negative drift and a continuous periodic function. Some implications of this characterization result for the computation and approximation of the asymptotic distributions are then discussed, including a direct proof, for the periodic Poisson case, of a recent result of Rolski [6] comparing mean asymptotic customer waiting time with that of a corresponding M/G/1 system.
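
The characterization invites a crude Monte Carlo approximation of the asymptotic time-of-day load: simulate the compound Poisson process with unit negative drift plus a periodic term and record its supremum over a long horizon. All choices below, including the sinusoidal periodic function and the truncated horizon, are illustrative:

```python
import math
import random

def sup_periodic_load(rate, mean_job, amp, horizon=100.0, dt=0.02, seed=None):
    """One draw of sup over [0, horizon] of S(t) - t + p(t), where S is
    compound Poisson (arrival rate `rate`, exponential jobs of mean
    `mean_job`) and p(t) = amp*sin(2*pi*t) is a continuous periodic
    function. Negative drift (stability) requires rate * mean_job < 1."""
    rng = random.Random(seed)
    t, s, best = 0.0, 0.0, 0.0
    next_jump = rng.expovariate(rate)
    while t < horizon:
        t += dt
        while next_jump <= t:            # add any jumps that occurred by t
            s += rng.expovariate(1.0 / mean_job)
            next_jump += rng.expovariate(rate)
        best = max(best, s - t + amp * math.sin(2 * math.pi * t))
    return best

draws = [sup_periodic_load(0.5, 1.0, 0.3, seed=i) for i in range(200)]
print(sum(draws) / len(draws))  # rough mean of the asymptotic 'time zero' load
```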

Journal ArticleDOI
TL;DR: Quality control applications sometimes require tolerance intervals for Poisson or binomial variables; situations of this type are described and the construction of the desired intervals is shown.
Abstract: Quality control applications sometimes require one to obtain tolerance intervals for Poisson or binomial variables. Situations of this type are described and the construction of the desired interval is shown. This involves the combination of probability..

Journal ArticleDOI
TL;DR: In this paper, a method for the solution of Poisson's equation on the surface of a sphere is given, which makes use of truncated double Fourier series expansions on the sphere and invokes the Galerkin approximation.
Abstract: A method for the solution of Poisson's equation on the surface of a sphere is given. The method makes use of truncated double Fourier series expansions on the sphere and invokes the Galerkin approximation. It has an operation count of approximately 12J²(1 + log₂J) for a latitude-longitude grid containing 2J(J − 1) + 2 data points. Numerical results are presented to demonstrate the method's accuracy and efficiency.
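
The paper's solver is specific to the sphere, but the core spectral idea (expand, divide by the Laplacian's eigenvalues, transform back) can be sketched on a doubly periodic planar grid. This is an analogue only, not the authors' spherical method:

```python
import numpy as np

def poisson_fft_periodic(f):
    """Spectral solve of Laplacian(u) = f on a doubly periodic [0, 2*pi)^2 grid.

    The k = 0 mode is set to zero (the mean of u is arbitrary; f must
    integrate to zero for the problem to be solvable).
    """
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    eig = -(kx ** 2 + ky ** 2)                 # Laplacian eigenvalues
    fhat = np.fft.fft2(f)
    uhat = np.zeros_like(fhat)
    nz = eig != 0
    uhat[nz] = fhat[nz] / eig[nz]
    return np.fft.ifft2(uhat).real

# Check against a known solution: u = sin(x)cos(2y) has Laplacian -5u
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_true = np.sin(X) * np.cos(2 * Y)
u = poisson_fft_periodic(-5 * u_true)
print(np.abs(u - u_true).max())  # ~1e-13, machine precision
```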

Journal ArticleDOI
Richard S. Hemmert
TL;DR: The concave-up yield vs. area curve is successfully modeled by partitioning the wafer into several Poisson subareas of different defect densities; for the data presented, the yield curves in the range of interest are described by a negative binomial distribution.
Abstract: Initial integrated circuit yield predictions were overly pessimistic because the assumption that defects were a homogeneous random population led to the logarithm of yield linearly declining with increasing chip area. In reality, the yield vs area curve is concave up, which can be successfully modeled by partitioning the wafer into several Poisson subareas of different defect densities. Previously, this partitioning was done by “eye”. Here an algorithm has been developed to do the partitioning. Good results over a wide range of yields have been obtained. For the particular data presented, the yield curves in the range of interest can be described by a negative binomial distribution, which implies the underlying defect density is governed by the gamma distribution. As previously anticipated, both led to overpredictions of the yield for large chip areas.
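
The two yield-versus-area laws contrasted in the abstract are one-liners, since yield is the zero-defect probability under each counting model. A sketch with hypothetical defect densities:

```python
import numpy as np

def yield_poisson(area, d0):
    """Homogeneous defects: log-yield declines linearly with area."""
    return np.exp(-area * d0)

def yield_negbin(area, d0, alpha):
    """Gamma-distributed defect density (negative binomial counts):
    the log-yield vs. area curve is concave up."""
    return (1.0 + area * d0 / alpha) ** (-alpha)

areas = np.linspace(0.1, 4.0, 5)  # chip area, arbitrary units
print(yield_poisson(areas, d0=1.0))
print(yield_negbin(areas, d0=1.0, alpha=2.0))  # much higher yield at large areas
```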

Journal ArticleDOI
TL;DR: For the p-variate Poisson mean, under the sum of weighted squared error losses with weights the reciprocals of the variances, a class of proper Bayes minimax estimators dominating the usual estimator (the sample mean) is produced.

Book ChapterDOI
01 Jan 1981
TL;DR: A large part of this chapter involves mixtures in which the components are binomial or Poisson distributions, since these have been extensively studied in the literature.
Abstract: In this chapter, mixtures of certain discrete distributions will be considered. As with mixtures of normals, Pearson (1915) appears to have been the first to study such distributions in any detail, deriving moment estimators for the parameters in a mixture of binomial density functions. A large part of this chapter will involve mixtures in which the components are binomial or Poisson distributions, since these have been extensively studied in the literature. However, mixtures of other discrete distributions will be considered briefly, in particular the multivariate Bernoulli distribution which, as noted by Wolfe (1969), may be used as the basis of latent class analysis (see Lazarsfeld, 1968). We begin by considering mixtures of binomial distributions.

Journal ArticleDOI
TL;DR: The theory remains applicable when the Poisson assumption is replaced with a negative binomial, simulating sampling from spatially aggregated populations, and a rule for sampling interval is proposed, based on duration of developmental time.
Abstract: The egg-ratio technique of Edmondson is commonly used to estimate rates of observed change (r), birth (b), and death (d) in zooplankton populations. If it is assumed that samples provide estimates of population size and egg number that follow a Poisson distribution, r, b, and d are approximately normally distributed. This is demonstrated with theory and checked with extensive computer simulation. The theory is applicable when the Poisson assumption is replaced with a negative binomial, simulating sampling from spatially aggregated populations. A rule for sampling interval is proposed, based on duration of developmental time. The t-distribution can be used to construct confidence intervals for r, b, and d, and to test for differences between sample estimates of these population parameters.
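
For reference, one common formulation of the egg-ratio estimates (Paloheimo's instantaneous-rate version; this is the standard textbook form, not necessarily the exact variant analyzed here):

```latex
% E = eggs per female (egg ratio), D = egg development time,
% N_0 and N_t = population sizes over an interval of length Delta t.
b = \frac{\ln(E + 1)}{D}, \qquad
r = \frac{\ln N_t - \ln N_0}{\Delta t}, \qquad
d = b - r
```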

01 Jan 1981
TL;DR: In this article, a simple random sample of size t is drawn with replacement from a finite population consisting of n elements, and the construction of confidence intervals for n and a variance test for the underlying model are investigated asymptotically.
Abstract: From a finite population consisting of n elements a simple random sample of size t is drawn with replacement. In certain situations n is unknown. The construction of confidence intervals for n and a variance test for the underlying model are investigated asymptotically. The results are also relevant for incomplete Poisson samples. Finally, an application is discussed.

Journal ArticleDOI
TL;DR: In this article, an effective Poisson's ratio for anisotropic materials is introduced, defined as the negative ratio of transverse and longitudinal strains averaged over all transverse directions.
Abstract: An effective Poisson's ratio is introduced for anisotropic materials as the negative ratio of transverse and longitudinal strains averaged over all transverse directions. It is shown that for certain orientations of the applied force this effective Poisson's ratio assumes negative values for α-quartz. This implies that such a bar increases its cross section under length extension.
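
In symbols, the definition given in the abstract reads as follows (with φ indexing the directions perpendicular to the loading axis):

```latex
% Transverse strain averaged over all directions perpendicular to the
% loading axis, relative to the longitudinal strain.
\nu_{\mathrm{eff}}
  = -\frac{\langle \varepsilon_t(\varphi) \rangle_{\varphi}}{\varepsilon_l}
  = -\frac{1}{2\pi\,\varepsilon_l} \int_0^{2\pi} \varepsilon_t(\varphi)\, \mathrm{d}\varphi
```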


Journal ArticleDOI
TL;DR: It appears unlikely that, with a data set from any single available source, a specific etiologic hypothesis for the maternal age dependence of Down's syndrome can be clearly inferred by the use of these or similar regression models.
Abstract: The maternal age dependence of Down's syndrome rates was analyzed by two mathematical models, a discontinuous (DS) slope model which fits different exponential equations to different parts of the 20–49 age interval and a CPE model which fits a function that is the sum of a constant and exponential term over this whole 20–49 range. The CPE model had been considered but rejected by Penrose, who preferred models postulating changes with age assuming either a power function X^10, where X is age, or a Poisson model in which accumulation of 17 events was the assumed threshold for the occurrence of Down's syndrome. However, subsequent analyses indicated that the two models preferred by Penrose did not fit recent data sets as well as the DS or CPE model. Here we report analyses of broadened power and Poisson models in which n (the postulated number of independent events) can vary. Five data sets are analyzed. For the power models the range of the optimal n is 11 to 13; for the Poisson it is 17 to 25. The DS, Poisson, and power models each give the best fit to one data set; the CPE, to two sets. No particular model is clearly preferable. It appears unlikely that, with a data set from any single available source, a specific etiologic hypothesis for the maternal age dependence of Down's syndrome can be clearly inferred by the use of these or similar regression models.
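
Schematically, the competing rate functions of maternal age x have the forms below (inferred from the abstract's descriptions; a, b, c, μ(x), and the event count n are fitted quantities):

```latex
\text{CPE:}\ R(x) = a + b\,e^{cx}; \qquad
\text{power:}\ R(x) \propto x^{n}; \qquad
\text{Poisson threshold:}\ R(x) \propto \Pr\{N(x) \ge n\},\quad N(x) \sim \text{Poisson}(\mu(x))
```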

ReportDOI
01 Jul 1981
TL;DR: In this paper, the one-dimensional Poisson-Vlasov equations are cast into Hamiltonian form and a Poisson Bracket in terms of the phase space density, as the sole dynamical variable, is presented.
Abstract: The one-dimensional Poisson-Vlasov equations are cast into Hamiltonian form. A Poisson Bracket in terms of the phase space density, as sole dynamical variable, is presented. This Poisson bracket is not of the usual form, but possesses the commutator properties of antisymmetry, bilinearity, and nonassociativity by virtue of the Jacobi requirement. Clebsch potentials are seen to yield a conventional (canonical) formulation. This formulation is discretized by expansion in terms of an arbitrary complete set of basis functions. In particular, a wave field representation is obtained.
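
The noncanonical bracket in question has the form common in this literature (one spatial dimension, with f the phase-space density and [·,·] the particle Poisson bracket); notation here follows the standard presentation rather than the report itself:

```latex
\{F, G\} = \int f\,\Big[\frac{\delta F}{\delta f}, \frac{\delta G}{\delta f}\Big]\, \mathrm{d}x\, \mathrm{d}v,
\qquad
[a, b] = \frac{\partial a}{\partial x}\frac{\partial b}{\partial v}
       - \frac{\partial a}{\partial v}\frac{\partial b}{\partial x}
```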

Journal ArticleDOI
Larry Lee, R. Pirie
TL;DR: In this article, a new graphical method for comparing trends in two or more series of events is described, where the series are assumed independent of one another and are assumed to follow nonhomogeneous Poisson processes with mean functions and rate functions unspecified.
Abstract: This paper describes a new graphical method for comparing trends in two or more series of events. The method is applicable, e.g., to observations such as the successive times to failure of two or more devices and the arrival times on succeeding days at a service facility. The graphs provide information about the relative trend and can be used to order several series with respect to rates of occurrence of events. The series are assumed independent of one another and are assumed to follow nonhomogeneous Poisson processes with mean functions and rate functions unspecified.
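
One simple diagnostic in this spirit (not necessarily the authors' exact construction) pairs the cumulative event counts of the two series at successive event epochs; a sketch assuming each input list of event times is sorted:

```python
def cumulative_count_pairs(times_a, times_b):
    """Pairs (N_a(t), N_b(t)) at every event epoch of either series.

    Plotted against each other, a roughly straight unit-slope path
    suggests equal rates; persistent curvature suggests one series'
    rate of occurrence grows relative to the other's.
    """
    epochs = sorted(set(times_a) | set(times_b))
    pairs, ia, ib = [], 0, 0
    for t in epochs:
        while ia < len(times_a) and times_a[ia] <= t:
            ia += 1
        while ib < len(times_b) and times_b[ib] <= t:
            ib += 1
        pairs.append((ia, ib))
    return pairs

print(cumulative_count_pairs([1.0, 2.5, 4.0], [0.5, 3.0, 3.5, 3.9]))
```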

Journal ArticleDOI
TL;DR: The probability generating functional representation of a multidimensional Poisson cluster process is utilized to derive a formula for its likelihood function, but the prohibitive complexity of this formula precludes its practical application to statistical inference.
Abstract: The probability generating functional representation of a multidimensional Poisson cluster process is utilized to derive a formula for its likelihood function, but the prohibitive complexity of this formula precludes its practical application to statistical inference. In the case of isotropic processes, it is however feasible to compute functions such as the probability Q(r) of finding no point in a disc of radius r and the probability Q(r | 0) of nearest-neighbor distances greater than r, as well as the expected number C(r | 0) of points at a distance less than r from a given point. Explicit formulas and asymptotic developments are derived for these functions in the n-dimensional case. These can effectively be used as tools for statistical analysis.
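
Since the likelihood is intractable, functions like Q(r) are natural simulation targets. A sketch that generates a planar Poisson cluster (Neyman-Scott type) process on a torus and estimates the empty-disc probability; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def neyman_scott(parent_rate, mean_kids, sigma, size=10.0):
    """Poisson parents, each with a Poisson number of Gaussian-displaced offspring."""
    n_par = rng.poisson(parent_rate * size * size)
    parents = rng.uniform(0, size, (n_par, 2))
    pts = []
    for p in parents:
        for _ in range(rng.poisson(mean_kids)):
            pts.append(p + rng.normal(0, sigma, 2))
    return np.array(pts) % size  # wrap onto a torus to avoid edge effects

def empty_disc_prob(points, r, trials=10_000, size=10.0):
    """Monte Carlo Q(r): no point within distance r of a random location."""
    centers = rng.uniform(0, size, (trials, 2))
    hits = 0
    for c in centers:
        d = np.abs(points - c)
        d = np.minimum(d, size - d)              # toroidal distance
        if (d ** 2).sum(axis=1).min() > r * r:
            hits += 1
    return hits / trials

pts = neyman_scott(parent_rate=0.5, mean_kids=5, sigma=0.2)
print(empty_disc_prob(pts, r=0.5))  # larger than for a Poisson process of equal intensity
```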