
Showing papers on "Poisson distribution" published in 2000


Journal ArticleDOI
TL;DR: In this article, the use of regression models based on the Poisson distribution was introduced as a tool for resolving common problems in analyzing aggregate crime rates, when the population size of an aggregate unit is small relative to the offense rate, crime rates must be computed from a small number of offenses.
Abstract: This article introduces the use of regression models based on the Poisson distribution as a tool for resolving common problems in analyzing aggregate crime rates. When the population size of an aggregate unit is small relative to the offense rate, crime rates must be computed from a small number of offenses. Such data are ill-suited to least-squares analysis. Poisson-based regression models of counts of offenses are preferable because they are built on assumptions about error distributions that are consistent with the nature of event counts. A simple elaboration transforms the Poisson model of offense counts to a model of per capita offense rates. To demonstrate the use and advantages of this method, this article presents analyses of juvenile arrest rates for robbery in 264 nonmetropolitan counties in four states. The negative binomial variant of Poisson regression effectively resolved difficulties that arise in ordinary least-squares analyses.

1,096 citations
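
The "simple elaboration" the abstract mentions is the standard offset trick: including log(population) as an offset turns a Poisson model of counts into a model of per capita rates. A minimal sketch on simulated data (variable names and coefficients are invented for illustration, not taken from the article):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 264                                   # counties, as in the article
pop = rng.integers(1_000, 50_000, n)      # juvenile population at risk
poverty = rng.normal(0, 1, n)             # one standardized covariate
y = rng.poisson(np.exp(-8.0 + 0.4 * poverty) * pop)   # arrest counts

X = sm.add_constant(poverty)
# log(population) as an offset turns the count model into a rate model
pois = sm.GLM(y, X, family=sm.families.Poisson(),
              offset=np.log(pop)).fit()
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial(),
                offset=np.log(pop)).fit()
print(pois.params, negbin.params)         # both near (-8.0, 0.4)
```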


Journal ArticleDOI
TL;DR: In this article, the authors adapt Lambert's methodology to an upper bounded count situation, thereby obtaining a zero-inflated binomial (ZIB) model, and add to the flexibility of these fixed effects models by incorporating random effects so that, e.g., the within-subject correlation and between-subject heterogeneity typical of repeated measures data can be accommodated.
Abstract: Summary. In a 1992 Technometrics paper, Lambert (1992, 34, 1–14) described zero-inflated Poisson (ZIP) regression, a class of models for count data with excess zeros. In a ZIP model, a count response variable is assumed to be distributed as a mixture of a Poisson(λ) distribution and a distribution with point mass of one at zero, with mixing probability p. Both p and λ are allowed to depend on covariates through canonical link generalized linear models. In this paper, we adapt Lambert's methodology to an upper bounded count situation, thereby obtaining a zero-inflated binomial (ZIB) model. In addition, we add to the flexibility of these fixed effects models by incorporating random effects so that, e.g., the within-subject correlation and between-subject heterogeneity typical of repeated measures data can be accommodated. We motivate, develop, and illustrate the methods described here with an example from horticulture, where both upper bounded count (binomial-type) and unbounded count (Poisson-type) data with excess zeros were collected in a repeated measures designed experiment.

829 citations
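
A minimal sketch of the fixed-effects zero-inflated binomial (ZIB) likelihood described here, with covariate links and random effects omitted for brevity; the simulated data and parameter values are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import binom

rng = np.random.default_rng(1)
m = 8                                      # upper bound of each count
zero = rng.random(500) < 0.3               # structural zeros, p = 0.3
y = np.where(zero, 0, rng.binomial(m, 0.4, 500))

def nll(theta):
    p, q = expit(theta)                    # mixing prob. and binomial prob.
    lik = p * (y == 0) + (1 - p) * binom.pmf(y, m, q)
    return -np.sum(np.log(lik))

fit = minimize(nll, x0=[0.0, 0.0])
print(expit(fit.x))                        # estimates near (0.3, 0.4)
```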


Journal ArticleDOI
Abstract: Summary This paper describes a technique for computing approximate maximum pseudolikelihood estimates of the parameters of a spatial point process. The method is an extension of Berman & Turner’s (1992) device for maximizing the likelihoods of inhomogeneous spatial Poisson processes. For a very wide class of spatial point process models the likelihood is intractable, while the pseudolikelihood is known explicitly, except for the computation of an integral over the sampling region. Approximation of this integral by a finite sum in a special way yields an approximate pseudolikelihood which is formally equivalent to the (weighted) likelihood of a loglinear model with Poisson responses. This can be maximized using standard statistical software for generalized linear or additive models, provided the conditional intensity of the process takes an ‘exponential family’ form. Using this approach a wide variety of spatial point process models of Gibbs type can be fitted rapidly, incorporating spatial trends, interaction between points, dependence on spatial covariates, and mark information.

358 citations
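
The Berman–Turner device the abstract builds on can be sketched in a few lines: data points plus dummy grid points, counting-measure quadrature weights, and a weighted Poisson GLM whose response is z/w. This is a toy illustration with an assumed intensity log λ(x, y) = a + b·x on the unit square; in practice the method is implemented in R's spatstat package:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
a, b = 4.0, 1.5                            # log lambda(x, y) = a + b * x
lam_max = np.exp(a + b)
pts = rng.random((rng.poisson(lam_max), 2))
keep = rng.random(len(pts)) < np.exp(a + b * pts[:, 0]) / lam_max
data = pts[keep]                           # simulated inhomogeneous pattern

g = 40                                     # dummy points on a g-by-g grid
gx, gy = np.meshgrid((np.arange(g) + 0.5) / g, (np.arange(g) + 0.5) / g)
x1 = np.concatenate([data[:, 0], gx.ravel()])
x2 = np.concatenate([data[:, 1], gy.ravel()])
z = np.concatenate([np.ones(len(data)), np.zeros(g * g)])

# counting-measure quadrature weights: tile area / points in tile
tile = (np.minimum((x1 * g).astype(int), g - 1) * g
        + np.minimum((x2 * g).astype(int), g - 1))
w = (1.0 / g**2) / np.bincount(tile, minlength=g * g)[tile]

# weighted Poisson GLM with response z/w is the Berman-Turner approximation
fit = sm.GLM(z / w, sm.add_constant(x1),
             family=sm.families.Poisson(), var_weights=w).fit()
print(fit.params)                          # close to (a, b) = (4.0, 1.5)
```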


Journal ArticleDOI
TL;DR: It is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses, the region of interest for many risk applications.
Abstract: The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms since their introduction. Hit theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta Poisson model, are used for extrapolating experimental dose-response data to the low doses often present in drinking water or food products. Unfortunately, the Beta Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses, the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is due to the fact that the risk cannot exceed the probability of exposure, a property that is not retained in the Beta Poisson approximation. This maximum possible response curve is important for uncertainty analysis, and for risk assessment of pathogens with unknown properties.

348 citations
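
The contrast the paper draws is easy to reproduce numerically, assuming the usual parameterization: the approximate Beta Poisson formula P(d) ≈ 1 − (1 + d/β)^(−α) versus the exact single-hit expectation over a Beta(α, β) infection probability, P(d) = 1 − 1F1(α, α + β, −d). Parameter values below are illustrative only:

```python
import numpy as np
from scipy.special import hyp1f1

alpha, beta = 0.25, 16.0                   # illustrative parameters
d = np.logspace(-2, 3, 6)                  # doses from 0.01 to 1000
approx = 1 - (1 + d / beta) ** (-alpha)    # the common Beta Poisson formula
exact = 1 - hyp1f1(alpha, alpha + beta, -d)  # exact single-hit expectation
for di, pa, pe in zip(d, approx, exact):
    print(f"dose {di:8.2f}   approx {pa:.4f}   exact {pe:.4f}")
```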


Journal ArticleDOI
01 Apr 2000-Genetics
TL;DR: This work introduces a parametric model that relaxes the molecular clock by allowing rates to vary across lineages according to a compound Poisson process and uses Markov chain Monte Carlo integration to evaluate the posterior probability distribution.
Abstract: The molecular clock hypothesis remains an important conceptual and analytical tool in evolutionary biology despite the repeated observation that the clock hypothesis does not perfectly explain observed DNA sequence variation. We introduce a parametric model that relaxes the molecular clock by allowing rates to vary across lineages according to a compound Poisson process. Events of substitution rate change are placed onto a phylogenetic tree according to a Poisson process. When an event of substitution rate change occurs, the current rate of substitution is modified by a gamma-distributed random variable. Parameters of the model can be estimated using Bayesian inference. We use Markov chain Monte Carlo integration to evaluate the posterior probability distribution because the posterior probability involves high dimensional integrals and summations. Specifically, we use the Metropolis-Hastings-Green algorithm with 11 different move types to evaluate the posterior distribution. We demonstrate the method by analyzing a complete mtDNA sequence data set from 23 mammals. The model presented here has several potential advantages over other models that have been proposed to relax the clock because it is parametric and does not assume that rates change only at speciation events. This model should prove useful for estimating divergence times when substitution rates vary across lineages.

333 citations
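
A toy simulation (not the authors' software) of the branch-rate process just described: rate-change events arrive as a Poisson process along a branch, and each event multiplies the current substitution rate by a mean-one gamma variable. The branch's average rate is the time average of the resulting step function:

```python
import numpy as np

rng = np.random.default_rng(3)

def average_branch_rate(t, event_rate=2.0, shape=4.0):
    """Average substitution rate over a branch of duration t."""
    n = rng.poisson(event_rate * t)            # number of rate-change events
    times = np.sort(rng.random(n) * t)
    mults = rng.gamma(shape, 1.0 / shape, n)   # mean-one gamma multipliers
    rates = np.cumprod(np.concatenate([[1.0], mults]))
    edges = np.concatenate([[0.0], times, [t]])
    return np.sum(rates * np.diff(edges)) / t  # time-average of step function

print([round(average_branch_rate(1.0), 3) for _ in range(5)])
```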


Journal ArticleDOI
TL;DR: Three theorems are introduced for characterizing limits of probabilities in Poisson games when the expected number of players becomes large, which are applied to derive formulas for pivot probabilities in binary elections, and to analyze a voting game that was studied by Ledyard.

309 citations


Journal ArticleDOI
TL;DR: An overview of statistical and probabilistic properties of words, as occurring in the analysis of biological sequences, and special emphasis lies on disentangling the complicated dependence structure between word occurrences, due to self-overlap as well as due to overlap between words.
Abstract: In the following, an overview is given on statistical and probabilistic properties of words, as occurring in the analysis of biological sequences. Counts of occurrence, counts of clumps, and renewal counts are distinguished, and exact distributions as well as normal approximations, Poisson process approximations, and compound Poisson approximations are derived. Here, a sequence is modelled as a stationary ergodic Markov chain; a test for determining the appropriate order of the Markov chain is described. The convergence results take the error made by estimating the Markovian transition probabilities into account. The main tools involved are moment generating functions, martingales, Stein’s method, and the Chen-Stein method. Similar results are given for occurrences of multiple patterns, and, as an example, the problem of unique recoverability of a sequence from SBH chip data is discussed. Special emphasis lies on disentangling the complicated dependence structure between word occurrences, due to self-overlap as well as due to overlap between words.

259 citations
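
The clumping effect behind that "complicated dependence structure" is easy to see numerically: counts of a self-overlapping word are overdispersed relative to Poisson, while declumped (clump) counts are nearly Poisson. A small illustrative sketch, with the word and sequence model chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(4)

def occurrences_and_clumps(s, w):
    hits = [i for i in range(len(s) - len(w) + 1) if s[i:i + len(w)] == w]
    # a new clump starts whenever the gap to the previous hit is >= len(w)
    clumps = sum(1 for j, i in enumerate(hits)
                 if j == 0 or i - hits[j - 1] >= len(w))
    return len(hits), clumps

stats = [occurrences_and_clumps("".join(rng.choice(list("ACGT"), 2000)), "AAA")
         for _ in range(500)]
for name, col in zip(("occurrences", "clumps"), zip(*stats)):
    x = np.asarray(col, dtype=float)
    print(f"{name:12s} mean {x.mean():6.2f}   var/mean {x.var() / x.mean():5.2f}")
```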


Journal ArticleDOI
TL;DR: Results are presented for a model in which storm centres arrive in a homogeneous Poisson process in space-time, and cells follow them in time according to a Bartlett–Lewis type cluster.
Abstract: Over a decade ago, point rainfall models based upon Poisson cluster processes were developed by Rodriguez-Iturbe, Cox and Isham. Two types of point process models were envisaged: the Bartlett–Lewis and the Neyman–Scott rectangular pulse models. Recent developments are reviewed here, including a number of empirical studies. The parameter estimation problem is addressed for both types of Poisson-cluster based models. The multiplicity of parameters which can be obtained for a given data set using the method of moments is illustrated and two approaches to finding a best set of parameters are presented. The use of a proper fitting method will allow for the problems encountered in regionalisation to be adequately dealt with. Applications of the point process model to flood design are discussed and finally, results for a model with dependent cell depth and duration are given. Taking into account the spatial features of rainfall, three multi-site models are presented and compared. They are all governed by a master Poisson process of storm origins and have a number of cell origins associated with each storm origin. The three models differ as to the type of dependence structure between the cell characteristics at different sites. Analytical properties are presented for these models and their ability to represent the spatial structure of a set of raingauge data in the South-West of England is examined. Continuous spatial-temporal models are currently being developed and results are presented for a model in which storm centres arrive in a homogeneous Poisson process in space-time, and cells follow them in time according to a Bartlett–Lewis type cluster. Examples of simulations using this model are shown and compared with radar data from the South-West of England. The paper concludes with a summary of the main areas in which further research is required.

255 citations
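
A toy single-site Neyman–Scott rectangular-pulse simulation in the spirit of the models reviewed here (all rate, displacement, duration, and depth parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 1000.0                                      # hours simulated
storms = rng.random(rng.poisson(0.02 * T)) * T  # Poisson storm origins
cells = []
for s in storms:
    for _ in range(rng.poisson(5)):             # cells per storm
        t0 = s + rng.exponential(2.0)           # displaced cell origin
        cells.append((t0, t0 + rng.exponential(3.0),   # rectangular pulse
                      rng.exponential(1.5)))           # depth in mm/h

t = np.arange(0.0, T, 1.0)                      # hourly grid
rain = sum(depth * ((t >= a) & (t < b)) for a, b, depth in cells)
print("mean hourly intensity (mm/h):", round(float(np.mean(rain)), 3))
```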


Journal ArticleDOI
Helge Blaker
TL;DR: In this article, the authors describe a method for improving standard exact confidence intervals in discrete distributions with respect to size while retaining the correct level, using a natural nesting condition: if α < α', the 1 - α' confidence interval is included in the 1 - α interval.
Abstract: The author describes a method for improving standard “exact” confidence intervals in discrete distributions with respect to size while retaining correct level. The binomial, negative binomial, hypergeometric, and Poisson distributions are considered explicitly. Contrary to other existing methods, the author's solution possesses a natural nesting condition: if α < α', the 1 - α' confidence interval is included in the 1 - α interval. Nonparametric confidence intervals for a quantile are also considered.

213 citations
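
Blaker's construction admits a direct grid-search sketch: accept μ whenever the total probability of outcomes at least as "extreme" (in the min-tail sense) as the observed count exceeds α. This brute-force illustration for the Poisson case is not the author's algorithm, just the definition applied literally:

```python
import numpy as np
from scipy.stats import poisson

def blaker_ci(x, alpha=0.05, mu_max=30.0, grid=3000, kmax=200):
    ks = np.arange(kmax)
    inside = []
    for mu in np.linspace(1e-8, mu_max, grid):
        # min-tail "extremeness" of every outcome under this mu
        gam = np.minimum(poisson.cdf(ks, mu), poisson.sf(ks - 1, mu))
        acceptability = poisson.pmf(ks, mu)[gam <= gam[x] + 1e-12].sum()
        if acceptability > alpha:
            inside.append(mu)
    return min(inside), max(inside)

print(blaker_ci(3))   # narrower than the classical exact (Garwood) interval
```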


Proceedings Article
10 Apr 2000
TL;DR: A survey of some old and new results in Poisson approximation is presented with special emphasis on algorithmic techniques for analytic functions.
Abstract: Poisson approximation extends the Bernstein-Bezier scheme from polynomials to arbitrary analytic functions. A survey of some old and new results in Poisson approximation is presented with special emphasis on algorithmic techniques for analytic functions.

207 citations
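
Assuming "Poisson approximation" here means the operator obtained by replacing the binomial weights of the Bernstein scheme with Poisson weights (the Szász–Mirakyan operator; this reading is our assumption), a minimal numerical sketch:

```python
import numpy as np
from scipy.stats import poisson

def szasz(f, n, x):
    """S_n f(x) = sum_k f(k/n) * P(Poisson(n x) = k)."""
    kmax = int(n * x + 10 * np.sqrt(n * x + 1) + 20)   # truncation point
    k = np.arange(kmax)
    return poisson.pmf(k, n * x) @ f(k / n)

f = np.exp                                  # an analytic test function
for n in (10, 100, 1000):
    print(n, round(szasz(f, n, 1.0), 5), "vs", round(float(np.e), 5))
```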


Journal ArticleDOI
TL;DR: In this paper, two estimators of the mean function of a counting process based on "panel count data" are studied, and the authors show that the estimator proposed by Sun and Kalbfleisch can be viewed as a pseudo-maximum likelihood estimator when a nonhomogeneous Poisson process model is assumed for the counting process.
Abstract: We study two estimators of the mean function of a counting process based on “panel count data.” The setting for “panel count data” is one in which $n$ independent subjects, each with a counting process with common mean function, are observed at several possibly different times during a study. Following a model proposed by Schick and Yu, we allow the number of observation times, and the observation times themselves, to be random variables. Our goal is to estimate the mean function of the counting process. We show that the estimator of the mean function proposed by Sun and Kalbfleisch can be viewed as a pseudo-maximum likelihood estimator when a non-homogeneous Poisson process model is assumed for the counting process. We establish consistency of both the nonparametric pseudo-maximum likelihood estimator of Sun and Kalbfleisch and the full maximum likelihood estimator, even if the underlying counting process is not a Poisson process. We also derive the asymptotic distribution of both estimators at a fixed time $t$, and compare the resulting theoretical relative efficiency with finite sample relative efficiency by way of a limited Monte Carlo study.
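
Under the working non-homogeneous Poisson model, the Sun–Kalbfleisch pseudo-likelihood estimator reduces to isotonic regression of the pooled panel counts on the pooled observation times, which makes a compact sketch possible (simulated data; scikit-learn's PAVA implementation is assumed):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(6)
mean_fn = lambda t: 2.0 * np.sqrt(t)              # true mean function
obs_t, obs_n = [], []
for _ in range(200):                              # 200 subjects
    times = np.sort(rng.uniform(0, 4, rng.integers(1, 6)))
    jumps = rng.poisson(np.diff(np.concatenate([[0.0], mean_fn(times)])))
    obs_t.extend(times)
    obs_n.extend(np.cumsum(jumps))                # panel counts N(t_ij)

obs_t, obs_n = np.asarray(obs_t), np.asarray(obs_n)
fitted = IsotonicRegression(increasing=True).fit_transform(obs_t, obs_n)
order = np.argsort(obs_t)
print("estimate at t = 1:",
      round(float(np.interp(1.0, obs_t[order], fitted[order])), 2),
      "  truth:", mean_fn(1.0))
```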

Journal ArticleDOI
01 May 2000-Topology
TL;DR: In this paper, it was shown that under certain mild assumptions a Lie bialgebroid integrates to a Poisson groupoid, generalizing a theorem of Karasev and of Weinstein.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss contravariant connections on Poisson manifolds and show that these connections play an important role in the study of global properties of Poisson manifold, and use them to define Poisson holonomy and new invariants.
Abstract: We discuss contravariant connections on Poisson manifolds. For vector bundles, the corresponding operational notion of a contravariant derivative had been introduced by I. Vaisman. We show that these connections play an important role in the study of global properties of Poisson manifolds and we use them to define Poisson holonomy and new invariants of Poisson manifolds.

Journal ArticleDOI
TL;DR: Adopting the expectation-maximization (EM) algorithm for use in computing the maximum a posteriori (MAP) estimate corresponding to the model, it is found that the model permits remarkably simple, closed-form expressions for the EM update equations.
Abstract: This paper describes a statistical multiscale modeling and analysis framework for linear inverse problems involving Poisson data. The framework itself is founded upon a multiscale analysis associated with recursive partitioning of the underlying intensity, a corresponding multiscale factorization of the likelihood (induced by this analysis), and a choice of prior probability distribution made to match this factorization by modeling the "splits" in the underlying partition. The class of priors used here has the interesting feature that the "noninformative" member yields the traditional maximum-likelihood solution; other choices are made to reflect prior belief as to the smoothness of the unknown intensity. Adopting the expectation-maximization (EM) algorithm for use in computing the maximum a posteriori (MAP) estimate corresponding to our model, we find that our model permits remarkably simple, closed-form expressions for the EM update equations. The behavior of our EM algorithm is examined, and it is shown that convergence to the global MAP estimate can be guaranteed. Applications in emission computed tomography and astronomical energy spectral analysis demonstrate the potential of the new approach.
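
As the abstract notes, the "noninformative" prior recovers the traditional maximum-likelihood solution; that ML baseline is the familiar EM (Richardson–Lucy) iteration for y ~ Poisson(Ax), sketched here on a toy 1-D deblurring problem rather than the paper's multiscale MAP variant:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64
x_true = np.zeros(n); x_true[20:30] = 5.0        # unknown intensity
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=0)                               # blur; columns sum to one
y = rng.poisson(A @ x_true)                      # Poisson data

x = np.ones(n)
for _ in range(200):                             # multiplicative EM update
    x *= A.T @ (y / np.maximum(A @ x, 1e-12))
print(np.round(x[18:32], 1))                     # roughly recovers the block
```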

Journal ArticleDOI
TL;DR: In this article, the authors considered a Poisson process with random intensity for which the distribution of intervals between jumps is described by an equation with fractional derivatives and obtained the generating function of the jump number in explicit form.
Abstract: We consider a Poisson process with random intensity for which the distribution of intervals between jumps is described by an equation with fractional derivatives. The distribution of random intensity of this process and the generating function of the jump number are obtained in explicit form. It is noted that the studied fractional Poisson law can be used for statistical description of chaotic processes of different physical nature demonstrating the phenomenon of anomalous diffusion.
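
A toy simulation of fractional-Poisson waiting times, using the standard inversion formula for Mittag–Leffler deviates (the sampling formula is our assumption, not taken from this paper); ν = 1 recovers the ordinary exponential/Poisson case, while ν < 1 gives the heavy tails associated with anomalous diffusion:

```python
import numpy as np

rng = np.random.default_rng(8)

def ml_waiting_times(n, nu, gamma=1.0):
    """Mittag-Leffler deviates; nu = 1 gives ordinary exponential waits."""
    u, v = rng.random(n), rng.random(n)
    if nu == 1.0:
        return -gamma * np.log(u)
    factor = np.sin(nu * np.pi) / np.tan(nu * np.pi * v) - np.cos(nu * np.pi)
    return -gamma * np.log(u) * factor ** (1.0 / nu)

for nu in (1.0, 0.7):
    t = ml_waiting_times(100_000, nu)
    print(f"nu = {nu}:  median wait {np.median(t):.3f},"
          f"  P(wait > 10) = {(t > 10).mean():.4f}")
```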

Journal ArticleDOI
TL;DR: In this article, the accuracy of energies and forces computed with a generalized Born (GB) model and the distance-dependent dielectric (DDD) model was evaluated with respect to detailed finite difference solutions of the Poisson equation (FDPE), for a small molecule in solution and for HIV-1 protease with the inhibitor KNI-272.
Abstract: This study characterizes the accuracy of energies and forces computed with a generalized Born (GB) model and the distance‐dependent dielectric (DDD) model with respect to detailed finite difference solutions of the Poisson equation (FDPE). Tests are done for a small molecule in solution and for HIV‐1 protease with inhibitor, KNI‐272. GB agrees well with FDPE for the small molecule, but less well for the protein system. The correlation between GB and FDPE energies is poorest in calculations of changes upon binding. Also, forces computed with the GB model are less accurate than energies. The DDD model is far less accurate than GB. Nanosecond stochastic dynamics simulations of HIV‐1 protease with an empty active site are used to examine the consequence of the models for the conformational preferences of the active site. Interestingly, the active site flaps remain near their starting conformations in the FDPE and GB simulations but collapse into the active site in the DDD simulation. © 2000 John Wiley & Sons, Inc. J Comput Chem 21: 295–309, 2000
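
For concreteness, the kind of GB energy benchmarked against FDPE can be sketched with the widely used Still et al. pairwise formula (toy charges, positions, and Born radii; this is not the paper's protocol, just the flavour of the model class):

```python
import numpy as np

def gb_energy(q, pos, born_r, eps_in=1.0, eps_out=80.0):
    """Still et al. pairwise GB polarization energy (kcal/mol, e, Angstrom)."""
    pre = -0.5 * 332.06 * (1.0 / eps_in - 1.0 / eps_out)
    d2 = np.sum((pos[:, None] - pos[None, :]) ** 2, axis=-1)
    rr = born_r[:, None] * born_r[None, :]
    f_gb = np.sqrt(d2 + rr * np.exp(-d2 / (4.0 * rr)))  # f_GB(i, i) = R_i
    return pre * np.sum(np.outer(q, q) / f_gb)          # full double sum

q = np.array([0.5, -0.5])                       # toy partial charges
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
print(gb_energy(q, pos, born_r=np.array([1.5, 1.7])), "kcal/mol")
```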

Journal ArticleDOI
TL;DR: In this paper, an EM algorithm for estimating the Poisson parameters of zero-inflated, zero-deflated, and standard Poisson models, when the zero observations are ignored, is presented.
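
For orientation, a generic EM iteration for the plain zero-inflated Poisson model (not necessarily the paper's exact algorithm, which also covers zero-deflation and the case where zero observations are ignored): the E-step computes the posterior probability that each zero is structural, and the M-step updates (p, λ) in closed form.

```python
import numpy as np

rng = np.random.default_rng(9)
y = np.where(rng.random(2000) < 0.25, 0, rng.poisson(3.0, 2000))

p, lam = 0.5, 1.0                              # crude starting values
for _ in range(100):
    # E-step: probability that an observed zero is a structural zero
    r0 = p / (p + (1 - p) * np.exp(-lam))
    z = np.where(y == 0, r0, 0.0)
    # M-step: closed-form updates
    p = z.mean()
    lam = np.sum((1 - z) * y) / np.sum(1 - z)
print(round(p, 3), round(lam, 3))              # near (0.25, 3.0)
```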

Proceedings ArticleDOI
26 Mar 2000
TL;DR: Evidence that, despite being non-Poisson, aggregating Web traffic causes it to smooth out as rapidly as Poisson traffic is presented, and the conclusion that variance changes linearly with mean bandwidth is useful to anyone provisioning a network for a large aggregate load of Web traffic.
Abstract: If data traffic were Poisson, increases in the amount of traffic aggregated on a network would rapidly decrease the relative size of bursts. The discovery of pervasive long-range dependence demonstrates that real network traffic is burstier than any possible Poisson model. We present evidence that, despite being non-Poisson, aggregating Web traffic causes it to smooth out as rapidly as Poisson traffic. That is, the relationship between changes in mean bandwidth and changes in variance is the same for Web traffic as it is for Poisson traffic. We derive our evidence from traces of real traffic in two ways: first, by observing how variance changes over the large range of mean bandwidths present in 24-hour traces; second, by observing the relationship of variance and mean bandwidth for individual users and combinations of users. Our conclusion, that variance changes linearly with mean bandwidth, should be useful (and encouraging) to anyone provisioning a network for a large aggregate load of Web traffic.

Journal ArticleDOI
TL;DR: The prediction method making use of the Poisson assumption appeared to be the most reliable of the three approaches and the estimator of the length of the prediction interval produced by this method has the smallest coverage error and is the most precise.
Abstract: The paper compares three different methods for performing disease incidence prediction based on simple interpolation techniques. The first method assumes that the age-period specific numbers of observed cases follow a Poisson distribution and the other two methods assume a normal distribution for the incidence rates. The main emphasis of the paper is on assessing the reliability of the three methods. For this purpose, ex post predictions produced by each method are checked for different cancer sites using data from the Cancer Control Region of Turku in Finland. In addition, the behaviour of the estimators of predicted expected values and prediction intervals, crucial for investigation of the reliability of prediction, are assessed using a simulation study. The prediction method making use of the Poisson assumption appeared to be the most reliable of the three approaches. The simulation study found that the estimator of the length of the prediction interval produced by this method has the smallest coverage error and is the most precise.

Journal ArticleDOI
TL;DR: In this article, the authors present the explicit solution of the Bayesian problem of sequential testing of two simple hypotheses about the intensity of an observed Poisson process, which consists of reducing the initial problem to a free-boundary differential-difference problem, and solving the latter by use of the principles of smooth and continuous fit.
Abstract: We present the explicit solution of the Bayesian problem of sequential testing of two simple hypotheses about the intensity of an observed Poisson process. The method of proof consists of reducing the initial problem to a free-boundary differential-difference Stefan problem, and solving the latter by use of the principles of smooth and continuous fit. A rigorous proof of the optimality of Wald’s sequential probability ratio test in the variational formulation of the problem is obtained as a consequence of the solution of the Bayesian problem.
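
Wald's sequential probability ratio test for a Poisson intensity, whose optimality the paper establishes, can be simulated directly: between events the log-likelihood ratio drifts down linearly and it jumps up at each event. The thresholds below use the classical Wald approximations; this is an illustration of the test itself, not of the paper's free-boundary solution.

```python
import numpy as np

rng = np.random.default_rng(11)
lam0, lam1, alpha, beta = 1.0, 2.0, 0.05, 0.05
A = np.log((1 - beta) / alpha)                 # accept-H1 boundary (Wald)
B = np.log(beta / (1 - alpha))                 # accept-H0 boundary (Wald)
drift, jump = lam1 - lam0, np.log(lam1 / lam0)

def sprt(true_lam):
    t, n, llr = 0.0, 0, 0.0
    while True:
        w = rng.exponential(1.0 / true_lam)    # time to the next event
        s = (llr - B) / drift                  # time for the LLR to drift to B
        if s < w:
            return "H0", round(t + s, 2)
        t += w; n += 1
        llr = n * jump - drift * t             # upward jump at each event
        if llr >= A:
            return "H1", round(t, 2)

print(sprt(lam0), sprt(lam1))
```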

Journal ArticleDOI
TL;DR: A method is suggested for predicting the distribution of scores in international soccer matches, treating each team’s goals scored as independent Poisson variables dependent on the Fédération Internationale de Football Association rating of each team, and the match venue.
Abstract: In this paper a method is suggested for predicting the distribution of scores in international soccer matches, treating each team’s goals scored as independent Poisson variables dependent on the Fédération Internationale de Football Association rating of each team, and the match venue.
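
The modelling idea is easy to sketch: two independent Poisson goal counts whose means depend on a rating difference and home advantage. The link function and coefficients below are invented for illustration, not the paper's fitted values:

```python
import numpy as np
from scipy.stats import poisson

def score_matrix(rating_diff, home_adv=0.25, max_goals=6):
    mu_home = np.exp(0.1 + 0.002 * rating_diff + home_adv)
    mu_away = np.exp(0.1 - 0.002 * rating_diff)
    goals = np.arange(max_goals + 1)
    p = np.outer(poisson.pmf(goals, mu_home), poisson.pmf(goals, mu_away))
    return p / p.sum()                        # joint P(home = i, away = j)

P = score_matrix(rating_diff=120.0)
print("P(home win) =", round(float(np.tril(P, -1).sum()), 3))
print("P(draw)     =", round(float(np.trace(P)), 3))
```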

Journal ArticleDOI
TL;DR: In this article, the authors consider the number of independent samples required until the first repeated value is seen and derive exact and asymptotic formulae for the distribution of this time and of the times until subsequent repeats.
Abstract: Given an arbitrary distribution on a countable set, consider the number of independent samples required until the first repeated value is seen. Exact and asymptotic formulae are derived for the distribution of this time and of the times until subsequent repeats. Asymptotic properties of the repeat times are derived by embedding in a Poisson process. In particular, necessary and sufficient conditions for convergence are given and the possible limits explicitly described. Under the same conditions the finite dimensional distributions of the repeat times converge to the arrival times of suitably modified Poisson processes, and random trees derived from the sequence of independent trials converge in distribution to an inhomogeneous continuum random tree.

Journal ArticleDOI
TL;DR: A new way of analysing the moments of the counting process for a counter system affected by various models of deadtime related to PET and SPECT imaging is presented and the suitability of the Poisson statistical model assumed in most statistical image reconstruction algorithms is studied.
Abstract: The statistics of photon counting by systems affected by deadtime are potentially important for statistical image reconstruction methods. We present a new way of analysing the moments of the counting process for a counter system affected by various models of deadtime related to PET and SPECT imaging. We derive simple and exact expressions for the first and second moments of the number of recorded events under various models. From our mean expression for a SPECT deadtime model, we derive a simple estimator for the actual intensity of the underlying Poisson process; simulations show that our estimator is unbiased even for extremely high count rates. From this analysis, we study the suitability of the Poisson statistical model assumed in most statistical image reconstruction algorithms. For systems containing 'modules' with several detector elements, where each element can cause deadtime losses for the entire module, such as block PET detectors or Anger cameras, the Poisson statistical model appears to be adequate even in the presence of deadtime losses.
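
A minimal simulation of the textbook non-paralyzable deadtime model and the classical rate correction λ = m/(1 − mτ); this illustrates the flavour of the analysis, not the paper's module-level PET/SPECT moment derivations:

```python
import numpy as np

rng = np.random.default_rng(13)
lam, tau, T = 500.0, 1e-3, 100.0               # true rate, deadtime, duration
arrivals = np.cumsum(rng.exponential(1.0 / lam, int(2 * lam * T)))
arrivals = arrivals[arrivals < T]

recorded, last = 0, -np.inf
for t in arrivals:                             # drop events inside deadtime
    if t - last >= tau:
        recorded += 1
        last = t
m = recorded / T                               # recorded event rate
print("true:", lam, " recorded:", round(m, 1),
      " corrected:", round(m / (1 - m * tau), 1))
```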

Journal ArticleDOI
TL;DR: In this article, the authors propose to use λ-estimators that are adapted to the numerator; when the numerator is a function of distance r, the λ-estimators should also depend on r and could then be called λ(r)-estimators.
Abstract: Ripley's K function, the L function and the pair correlation function are important second order characteristics of spatial point processes. These functions are usually estimated by ratio estimators, where the numerators are Horvitz-Thompson edge corrected estimators and the denominators estimate the intensity or its square. It is possible to improve these estimators with respect to bias and estimation variance by means of adapted distance dependent intensity estimators. Further improvement is possible by means of refined estimators of the square of intensity. All this is shown by statistical analysis of simulated Poisson, cluster and hard core processes. Many estimators of distributional characteristics of stationary point processes have the form of quotients. Important examples are the usual estimators of Ripley's K function and of the pair correlation function g, see Stoyan et al. (1995) and sections 3 and 4 for explanation. These estimators are constructed so that the numerator is an unbiased estimator of some characteristic, while the denominator estimates the intensity λ or its square. The unbiasedness of the numerator is obtained by means of edge correction. Usually, it is of Horvitz-Thompson type (Horvitz & Thompson, 1952; Overton & Stehman, 1995; Baddeley, 1998). The price of unbiasedness via edge correction is typically a large variance, and usually one has to accept that also for unbiased denominators the ratio estimators are biased and have considerable variances. For the particular cases of K and L functions and of pair correlation functions, heuristic modifications of standard estimators have been suggested, which try to improve the quality of estimators, see e.g. Davis & Peebles (1983), Doguwa & Upton (1989) and Stein (1991, 1993). These modifications are either ad hoc modifications suitable for particular point process models or partial solutions already in the spirit of the present paper. The work by Hamilton (1993), Landy & Szalay (1993) and Picka (1997) suggests a better, more systematic approach, which is applicable to g, K, and L and also to further characteristics not discussed here. The main idea is to use λ-estimators that are adapted to the numerator; when the numerator is a function of distance r, also the λ-estimators should depend on r and could then be called λ(r)-estimators. If then, for fixed r, both parts of the ratio estimator are positively correlated, one can expect reduction of bias as well as of variance. This idea was applied by Hamilton in the case of estimating the pair correlation function in R^3 and by Picka for improving the quality of estimation of random set characteristics such as the correlation function, where it is useful to work with adapted volume fraction estimators depending on distance r.

Journal ArticleDOI
TL;DR: The counter model is generalized to allow the Poisson event rate to vary with time and closed-form expressions are obtained for response probabilities under a proportional-rates assumption and for mean RT under conditions in which the integrated event rate increases as an arbitrary power of time.
Abstract: An important class of sequential-sampling models for response time (RT) assumes that evidence for competing response alternatives accrues in parallel and that a response is made when the evidence total for a particular response exceeds a criterion. One member of this class of models is the Poisson counter model, in which evidence accrues in unit increments and the waiting time between increments is exponentially distributed. This paper generalizes the counter model to allow the Poisson event rate to vary with time. General expressions are obtained for the RT distributions for the two- and the m-alternative cases. Closed-form expressions are obtained for response probabilities under a proportional-rates assumption and for mean RT under conditions in which the integrated event rate increases as an arbitrary power of time. An application in the area of early vision is described, in which the Poisson event rates are proportional to the outputs of sustained and transient channels.

Journal ArticleDOI
TL;DR: A method was developed to estimate divergence times using loci that may be overdispersed, and a model consistent with a Cambrian origination of the animal phyla, although significantly less likely than a much older divergence, fitted the data well.
Abstract: Molecular loci that fail relative-rate tests are said to be "overdispersed." Traditional molecular-clock approaches to estimating divergence times do not take this into account. In this study, a method was developed to estimate divergence times using loci that may be overdispersed. The approach was to replace the traditional Poisson process assumption with a more general stationary process assumption. A probability model was developed, and an accompanying computer program was written to find maximum-likelihood estimates of divergence times under both the Poisson process and the stationary process assumptions. In simulation, it was shown that confidence intervals under the traditional Poisson assumptions often vastly underestimate the true confidence limits for overdispersed loci. Both models were applied to two data sets: one from land plants, the other from the higher metazoans. In both cases, the traditional Poisson process model could be rejected with high confidence. Maximum-likelihood analysis of the metazoan data set under the more general stationary process suggested that their radiation occurred well over a billion years ago, but confidence intervals were extremely wide. It was also shown that a model consistent with a Cambrian (or nearly Cambrian) origination of the animal phyla, although significantly less likely than a much older divergence, fitted the data well. It is argued that without an a priori understanding of the variance in the time between substitutions, molecular data sets may be incapable of ever establishing the age of the metazoan radiation.

Journal ArticleDOI
TL;DR: Under a given condition on correlation coefficients, it is found that correlated Poisson processes can be decomposed into independentPoisson processes.
Abstract: For the integrate-and-fire model with or without reversal potentials, we consider how correlated inputs affect the variability of cellular output. For both models, the variability of efferent spike trains measured by coefficient of variation (CV) of the interspike interval is a nondecreasing function of input correlation. When the correlation coefficient is greater than 0.09, the CV of the integrate-and-fire model without reversal potentials is always above 0.5, no matter how strong the inhibitory inputs. When the correlation coefficient is greater than 0.05, CV for the integrate-and-fire model with reversal potentials is always above 0.5, independent of the strength of the inhibitory inputs. Under a given condition on correlation coefficients, we find that correlated Poisson processes can be decomposed into independent Poisson processes. We also develop a novel method to estimate the distribution density of the first passage time of the integrate-and-fire model.
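
The decomposition mentioned in the TL;DR has a simple constructive converse worth sketching: superposing a shared Poisson component with two private ones yields correlated Poisson counts with correlation λ_c/(λ_c + λ_p). A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(14)
lam_c, lam_p, T, trials = 5.0, 15.0, 1.0, 20_000
common = rng.poisson(lam_c * T, trials)        # shared input
n1 = common + rng.poisson(lam_p * T, trials)   # private input, train 1
n2 = common + rng.poisson(lam_p * T, trials)   # private input, train 2
print("empirical corr:", round(float(np.corrcoef(n1, n2)[0, 1]), 3),
      "  theory:", lam_c / (lam_c + lam_p))
```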

Journal ArticleDOI
TL;DR: In this paper, a new approach for exploring the stochastic structure of clouds is proposed using a direct relation between number density variance and the pair correlation function, which is shown to agree with pair correlation functions calculated for droplet counts obtained from an aircraft-mounted cloud probe.
Abstract: Recent studies have led to the statistical characterization of the spatial (temporal) distributions of cloud (precipitation) particles as a doubly stochastic Poisson process. This paper arrives at a similar conclusion (larger-than-Poissonian variance) via the more fundamental route of statistical physics and significantly extends previous findings in several ways. The focus is on the stochastic structure in the spatial distribution of cloud particles. A new approach for exploring the stochastic structure of clouds is proposed using a direct relation between number density variance and the pair correlation function. In addition, novel counting diagrams, particularly useful for analyzing counts at low data rates, demonstrate droplet clustering and striking deviations from Poisson randomness on small (centimeter) scales. These findings are shown to agree with pair correlation functions calculated for droplet counts obtained from an aircraft-mounted cloud probe. Time series of the arrival of each dro...

Journal ArticleDOI
TL;DR: In this article, the authors compared the performance of different methods of measuring the complex Poisson's ratio of viscoelastic materials as a function of frequency, with particular attention to the accuracy of determining the relevant loss factor.

Journal ArticleDOI
Ward Whitt1
TL;DR: By exploiting an infinite-server-model lower bound, it is shown that the tails of the steady-state and transient waiting-time distributions in the M/GI/s queue with unlimited waiting room and the first-come first-served discipline are bounded below by tails of Poisson distributions.
Abstract: By exploiting an infinite-server-model lower bound, we show that the tails of the steady-state and transient waiting-time distributions in the M/GI/s queue with unlimited waiting room and the first-come first-served discipline are bounded below by tails of Poisson distributions. As a consequence, the tail of the steady-state waiting-time distribution is bounded below by a constant times the sth power of the tail of the service-time stationary-excess distribution. We apply that bound to show that the steady-state waiting-time distribution has a heavy tail (with appropriate definition) whenever the service-time distribution does. We also establish additional results that enable us to nearly capture the full asymptotics in both light and heavy traffic. The difference between the asymptotic behavior in these two regions shows that the actual asymptotic form must be quite complicated.