
Showing papers on "Poisson distribution published in 1998"


Journal ArticleDOI
TL;DR: In this article, a classical confidence belt construction is proposed that unifies the treatment of upper confidence limits for null results and two-sided confidence intervals for non-null results, applied to Poisson processes with background and to Gaussian errors with a bounded physical region.
Abstract: We give a classical confidence belt construction which unifies the treatment of upper confidence limits for null results and two-sided confidence intervals for non-null results. The unified treatment solves a problem (apparently not previously recognized) that the choice of upper limit or two-sided intervals leads to intervals which are not confidence intervals if the choice is based on the data. We apply the construction to two related problems which have recently been a battle-ground between classical and Bayesian statistics: Poisson processes with background, and Gaussian errors with a bounded physical region. In contrast with the usual classical construction for upper limits, our construction avoids unphysical confidence intervals. In contrast with some popular Bayesian intervals, our intervals eliminate conservatism (frequentist coverage greater than the stated confidence) in the Gaussian case and reduce it to a level dictated by discreteness in the Poisson case. We generalize the method in order to apply it to analysis of experiments searching for neutrino oscillations. We show that this technique both gives correct coverage and is powerful, while other classical techniques that have been used by neutrino oscillation search experiments fail one or both of these criteria.
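
The unified ordering principle lends itself to direct computation. Below is a minimal sketch (not the authors' code) of the likelihood-ratio ordering for a Poisson signal with known background: for each candidate signal mean, counts are ranked by the ratio of their probability to the probability under the best physically allowed signal, and an acceptance band is filled until the desired coverage is reached. The grid ranges, count cutoff, and the assumption that the acceptance band is contiguous are choices made here for illustration.

```python
import numpy as np
from scipy.stats import poisson

def feldman_cousins_interval(n_obs, b, cl=0.90,
                             mu_grid=np.arange(0.0, 15.0, 0.005), n_max=100):
    """Unified interval for a Poisson signal mean mu with known background b.

    For each candidate mu, rank counts n by the likelihood ratio
    R(n) = P(n | mu+b) / P(n | mu_best+b), where mu_best = max(0, n-b) is the
    best *physically allowed* signal, and accept counts until coverage >= cl.
    """
    n = np.arange(n_max)
    accepted = []
    for mu in mu_grid:
        p = poisson.pmf(n, mu + b)
        p_best = poisson.pmf(n, np.maximum(n - b, 0.0) + b)
        order = np.argsort(-(p / p_best))          # decreasing likelihood ratio
        k = np.searchsorted(np.cumsum(p[order]), cl) + 1
        band = n[order[:k]]                        # acceptance band for this mu
        if band.min() <= n_obs <= band.max():      # bands assumed contiguous
            accepted.append(mu)
    return min(accepted), max(accepted)

# n = 0 observed with known background b = 3 at 90% CL: the unified interval
# stays physical; the paper reports an upper limit of 1.08 for this case.
print(feldman_cousins_interval(0, 3.0))
```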

2,830 citations


Book
23 Jan 1998
TL;DR: In this book, the authors develop the CUSUM chart for a normal mean, compare it with the Shewhart Xbar chart, and extend CUSUM design and theory to other distributions, including the Poisson and binomial.
Abstract: 1 Introduction.- 1.1 Common-cause and special-cause variability.- 1.2 Transient and persistent special causes.- 1.3 The Shewhart and CUSUM charts.- 1.4 Basis for the CUSUM chart for a normal mean.- 1.4.1 Statistical properties of the CUSUM.- 1.5 Out-of-control distribution of the CUSUM.- 1.6 Testing for a shift - the V mask.- 1.7 Estimation following a signal.- 1.8 Using individual readings or rational groups.- 1.9 The decision interval form of the CUSUM.- 1.9.1 Example.- 1.10 Summary.- 1.11 Further reading.- 2 CUSUM design.- 2.1 The choice of k and h.- 2.1.1 Reference value k - "tuning" for a specific shift.- 2.2 Runs, run length, and average run length.- 2.2.1 The choice of h, the decision interval.- 2.2.2 Calculating the k, h, ARL relationship.- 2.2.3 A closer look at the choice of in-control ARL.- 2.2.4 Designing a CUSUM of Xbar.- 2.3 The Shewhart Xbar chart as CUSUM.- 2.4 Summary.- 2.5 Further reading.- 3 More about normal data.- 3.1 In-control ARLs.- 3.2 Out-of-control ARLs.- 3.2.1 Model.- 3.2.2 The ARL following a shift in mean.- 3.2.3 ARL sensitivity to choice of k.- 3.2.4 Out-of-control states and two-sided CUSUMs.- 3.3 FIR CUSUMs: zero start and steady state start.- 3.3.1 Introduction.- 3.3.2 Out-of-control ARL of the FIR CUSUM.- 3.3.3 ARL of two-sided FIR CUSUMs.- 3.3.4 Initial and steady-state ARL.- 3.4 Controlling for the mean within a range.- 3.4.1 Example.- 3.5 The impact of variance shifts.- 3.5.1 Individual data - the approximate normal transform.- 3.5.2 Rational groups - variance CUSUMs.- 3.6 Combined Shewhart and CUSUM charts.- 3.6.1 Example.- 3.7 Effect of model departures.- 3.7.1 Nonnormality.- 3.7.2 Independence.- 3.8 Weighted CUSUMs.- 3.8.1 Example.- 3.9 Summary.- 3.10 Further reading.- 4 Other continuous distributions.- 4.1 The gamma family and normal variances.- 4.1.1 Background.- 4.1.2 Normal variances.- 4.1.3 Design of the CUSUM for scale.- 4.1.4 Example: Sugar bags.- 4.1.5 Shift in the gamma shape parameter β.- 4.1.6 Example - shift in β.- 4.2 The inverse Gaussian family.- 4.2.1 Background.- 4.2.2 Shift in mean.- 4.2.3 Shift in scale parameter.- 4.3 Example from General Motors.- 4.3.1 CUSUM chart for location.- 4.3.2 CUSUM chart for λ.- 4.3.3 Remarks.- 4.4 Comments.- 4.5 Further reading.- 5 Discrete data.- 5.1 Types of discrete data.- 5.1.1 Binomial data.- 5.1.2 Count data.- 5.2 The graininess of the ARL function.- 5.3 The Poisson distribution and count data.- 5.3.1 Useful properties of the Poisson distribution.- 5.4 The Poisson and CUSUMs.- 5.4.1 Design for an upward shift.- 5.4.2 Downward shift.- 5.4.3 ARLs.- 5.4.4 Example.- 5.4.5 The effect of departures from Poisson.- 5.4.6 Checking conformity to the Poisson model.- 5.5 Weighted Poisson CUSUMs.- 5.6 The binomial distribution.- 5.6.1 Background.- 5.6.2 Examples.- 5.6.3 The choice of m.- 5.7 Weighted binomial CUSUMs.- 5.7.1 Example.- 5.8 Other discrete distributions.- 5.9 Summary.- 5.10 Further reading.- 6 Theoretical foundations of the CUSUM.- 6.1 General theory.- 6.1.1 Relation of the SPRT to the CUSUM.- 6.1.2 Optimality properties.- 6.2 The general exponential family.- 6.2.1 Derivation of the CUSUM for a normal mean shift.- 6.2.2 The gamma family and normal variance.- 6.2.3 Relation to normal variances.- 6.2.4 The Poisson family.- 6.2.5 The binomial family.- 6.2.6 The negative binomial family.- 6.2.7 The inverse Gaussian family.- 6.2.8 The Weibull distribution.- 6.2.9 Distributions outside the exponential family.- 6.3 The Markov property of CUSUMs.- 6.4 Getting the ARL.- 6.4.1 The renewal equations.- 6.4.2 The Markov chain approach.- 6.4.3 Simulation using variance reduction techniques.- 6.5 Summary.- 6.6 Further reading.- 7 Calibration and short runs.- 7.1 The self-starting approach.- 7.2 The self-starting CUSUM for a normal mean.- 7.2.1 Special features of self-starting charts.- 7.3 Self-starting CUSUMs for gamma data.- 7.3.1 Background.- 7.3.2 The scheme.- 7.3.3 Example.- 7.3.4 Normal data - control of mean and variance.- 7.3.5 Comments.- 7.4 Discrete data.- 7.4.1 The Poisson distribution.- 7.4.2 The binomial distribution.- 7.4.3 Updating the targets.- 7.5 Summary.- 7.6 Further reading.- 8 Multivariate data.- 8.1 Outline of the multivariate normal.- 8.2 Shewhart charting - Hotelling's T2.- 8.3 CUSUM charting - various approaches.- 8.3.1 Collections of univariate CUSUMs.- 8.4 Regression adjustment.- 8.4.1 Example.- 8.4.2 SPC use of regression-adjusted variables.- 8.4.3 Example - monitoring a carbide plant.- 8.5 Choice of regression adjustment.- 8.6 The use of several regression-adjusted variables.- 8.6.1 Example.- 8.7 The multivariate exponentially weighted moving average.- 8.8 Summary.- 8.9 Further reading.- 9 Special topics.- 9.1 Robust CUSUMs.- 9.2 Recursive residuals in regression.- 9.2.1 Definition and properties.- 9.2.2 Example.- 9.3 Autocorrelated data.- 9.3.1 Example.- 9.4 Summary.- 9.5 Further reading.- 9.5.1 Time series.- 9.5.2 Score methods.- 9.5.3 Robustification.- 9.5.4 Recursive residuals.- 10 Software.- 10.1 Programs and templates.- 10.2 Data files.- References.
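
As a concrete illustration of the decision-interval form of Chapter 1 combined with the Poisson design of Chapter 5, here is a minimal sketch. The reference value k follows from the Poisson likelihood ratio; the decision interval h = 5.0 and the means are illustrative values (in practice h would be chosen from the k, h, ARL relationship of Chapter 2).

```python
import numpy as np

def poisson_cusum(counts, mu0, mu1, h):
    """One-sided decision-interval CUSUM for an upward shift mu0 -> mu1 in a
    Poisson mean. The reference value comes from the Poisson likelihood
    ratio: k = (mu1 - mu0) / (log(mu1) - log(mu0))."""
    k = (mu1 - mu0) / (np.log(mu1) - np.log(mu0))
    s = 0.0
    for i, x in enumerate(counts):
        s = max(0.0, s + x - k)   # cumulate exceedances over k, floored at 0
        if s > h:                 # decision interval exceeded: signal
            return i
    return None                   # no signal

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(4.0, 50), rng.poisson(6.0, 50)])
print(poisson_cusum(counts, mu0=4.0, mu1=6.0, h=5.0))  # typically soon after t=50
```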

653 citations


Journal ArticleDOI
26 Mar 1998-Nature
TL;DR: In this paper, the authors show that, contrary to the belief that a negative Poisson's ratio is rare in crystalline solids, 69% of the cubic elemental metals have a negative Poisson's ratio when stretched along the [110] direction, which in turn permits positive Poisson's ratios up to the stability limit of 2 in the orthogonal lateral direction.
Abstract: Poisson's ratio is, for specified directions, the ratio of a lateral contraction to the longitudinal extension during the stretching of a material. Although a negative Poisson's ratio (that is, a lateral extension in response to stretching) is not forbidden by thermodynamics, this property is generally believed to be rare in crystalline solids [1]. In contrast to this belief, 69% of the cubic elemental metals have a negative Poisson's ratio when stretched along the [110] direction. For these metals, we find that correlations exist between the work function and the extremal values of Poisson's ratio for this stretch direction, which we explain using a simple electron-gas model. Moreover, these negative Poisson's ratios permit the existence, in the orthogonal lateral direction, of positive Poisson's ratios up to the stability limit of 2 for cubic crystals. Such metals having negative Poisson's ratios may find application as electrodes that amplify the response of piezoelectric sensors.
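
For reference, the quantity under discussion: for a stretch direction $\mathbf{n}$ and a lateral direction $\mathbf{m}$ (the paper takes $\mathbf{n} = [110]$),

$$\nu_{\mathbf{n},\mathbf{m}} = -\,\frac{\varepsilon_{\mathbf{m}}}{\varepsilon_{\mathbf{n}}},$$

so a negative value means the material extends laterally ($\varepsilon_{\mathbf{m}} > 0$) while being stretched ($\varepsilon_{\mathbf{n}} > 0$), and the two lateral directions orthogonal to [110] can carry Poisson's ratios of opposite sign, as described above.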

652 citations


Journal ArticleDOI
TL;DR: The results indicate that lumping host data can hide important variations in aggregation between hosts and can exaggerate the true degree of aggregation.
Abstract: Frequency distributions from 49 published wildlife host-macroparasite systems were analysed by maximum likelihood for goodness of fit to the negative binomial distribution. In 45 of the 49 (90%) data-sets, the negative binomial distribution provided a statistically satisfactory fit. In the other 4 data-sets the negative binomial distribution still provided a better fit than the Poisson distribution, and only 1 of the data-sets fitted the Poisson distribution. The degree of aggregation was large, with 43 of the 49 data-sets having an estimated k of less than 1. From these 19 data-sets, 22 subsets of host data were available (i.e. host data could be divided by host sex, age, or where or when hosts were sampled). In 11 of these 22 subsets there was significant variation in the degree of aggregation between host subsets of the same host-parasite system. A common k estimate was always larger than that obtained with all the host data considered together. These results indicate that lumping host data can hide important variations in aggregation between hosts and can exaggerate the true degree of aggregation. Wherever possible common k estimates should be used to estimate the degree of aggregation. In addition, significant differences in the degree of aggregation between subgroups of host data were generally associated with significant differences in both mean parasite burdens and the prevalence of infection.
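
A maximum-likelihood fit of the negative binomial aggregation parameter k of the kind used above is straightforward to sketch. The snippet below is illustrative only: the simulated parameter values and the simplification of fixing the mean at the sample mean are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom, poisson

def fit_negbin_k(x):
    """Profile MLE of the aggregation parameter k, with the negative binomial
    mean fixed at the sample mean (a simplification for illustration)."""
    m = x.mean()
    nll = lambda k: -nbinom.logpmf(x, k, k / (k + m)).sum()
    return minimize_scalar(nll, bounds=(1e-3, 100.0), method="bounded").x

rng = np.random.default_rng(0)
x = rng.negative_binomial(0.5, 0.5 / (0.5 + 10.0), 200)  # aggregated: k = 0.5
k_hat = fit_negbin_k(x)
ll_nb = nbinom.logpmf(x, k_hat, k_hat / (k_hat + x.mean())).sum()
ll_po = poisson.logpmf(x, x.mean()).sum()
print(k_hat, ll_nb, ll_po)  # negative binomial fits far better than Poisson
```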

427 citations


01 Jan 1998
TL;DR: In this paper, the authors discuss models for count data that allow for excess zeros, emphasizing the distinction between structural zeros, which are inevitable, and sampling zeros, which occur by chance.
Abstract: Poisson regression models provide a standard framework for the analysis of count data. In practice, however, count data are often overdispersed relative to the Poisson distribution. One frequent manifestation of overdispersion is that the incidence of zero counts is greater than expected for the Poisson distribution and this is of interest because zero counts frequently have special status. For example, in counting disease lesions on plants, a plant may have no lesions either because it is resistant to the disease, or simply because no disease spores have landed on it. This is the distinction between structural zeros, which are inevitable, and sampling zeros, which occur by chance. In recent years there has been considerable interest in models for count data that allow for excess zeros, particularly in the econometric literature. These models complement more conventional models for overdispersion that concentrate on modelling the variance-mean relationship correctly. Application areas are diverse and have included manufacturing defects (Lambert, 1992), patent applications (Crepon & Duguet, 1997), road safety (Miaou, 1994), species abundance (Welsh et al., 1996; Faddy, 1998), medical consultations
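
The zero-inflated Poisson (ZIP) model referred to here mixes a point mass at zero with a Poisson component. A minimal maximum-likelihood sketch, with hypothetical parameter values and a logit/log parameterization chosen here to keep the parameters in range:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def zip_nll(params, y):
    """Negative log-likelihood of the zero-inflated Poisson:
    P(0) = pi + (1-pi)e^{-lam},  P(y) = (1-pi) Poisson(y; lam) for y > 0."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))   # logit scale keeps pi in (0, 1)
    lam = np.exp(params[1])                 # log scale keeps lam positive
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log(1.0 - pi) + poisson.logpmf(y[y > 0], lam)
    return -((y == 0).sum() * ll_zero + ll_pos.sum())

rng = np.random.default_rng(0)
# 40% structural zeros (e.g. resistant plants), Poisson(3) sampling otherwise
y = np.where(rng.uniform(size=500) < 0.4, 0, rng.poisson(3.0, 500))
res = minimize(zip_nll, x0=np.zeros(2), args=(y,))
print(1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1]))  # ~0.4 and ~3.0
```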

411 citations


Journal ArticleDOI
TL;DR: The general definition of equilibrium for games with population uncertainty is formulated, and it is shown that the equilibria of Poisson games are invariant under payoff-irrelevant type splitting.
Abstract: A general class of models is developed for analyzing games with population uncertainty. Within this general class, a special class of Poisson games is defined. It is shown that Poisson games are uniquely characterized by properties of independent actions and environmental equivalence. The general definition of equilibrium for games with population uncertainty is formulated, and it is shown that the equilibria of Poisson games are invariant under payoff-irrelevant type splitting. An example of a large voting game is discussed, to illustrate the advantages of using a Poisson game model for large games.
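
The independent-actions property that characterizes Poisson games rests on the decomposition property of the Poisson distribution: if the total population is Poisson(n) and each player is independently of type t with probability r_t, then the type counts are independent Poisson(n r_t) variables. A quick simulation check with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50.0, np.array([0.3, 0.7])      # expected population, type probabilities
reps = 200_000
total = rng.poisson(n, reps)           # population size is Poisson(n)
t1 = rng.binomial(total, r[0])         # each player independently of type 1
t2 = total - t1
# Decomposition: t1 and t2 are independent Poisson(n * r_t) variables
print(t1.mean(), t1.var())             # both ~ n*r[0] = 15 (Poisson: mean = var)
print(np.corrcoef(t1, t2)[0, 1])       # ~ 0: the two type counts are independent
```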

361 citations


01 Jan 1998
TL;DR: In this paper, a Bayesian inferential framework is proposed to solve the prediction problem for non-linear functions of a Gaussian spatial stochastic process S(x), making a proper allowance for the uncertainty in the estimation of any model parameters.
Abstract: Conventional geostatistical methodology solves the problem of predicting the realized value of a linear functional of a Gaussian spatial stochastic process S(x) based on observations Yi = S(xi) + Zi at sampling locations xi, where the Zi are mutually independent, zero-mean Gaussian random variables. We describe two spatial applications for which Gaussian distributional assumptions are clearly inappropriate. The first concerns the assessment of residual contamination from nuclear weapons testing on a South Pacific island, in which the sampling method generates spatially indexed Poisson counts conditional on an unobserved spatially varying intensity of radioactivity; we conclude that a conventional geostatistical analysis oversmooths the data and underestimates the spatial extremes of the intensity. The second application provides a description of spatial variation in the risk of campylobacter infections relative to other enteric infections in part of north Lancashire and south Cumbria. For this application, we treat the data as binomial counts at unit postcode locations, conditionally on an unobserved relative risk surface which we estimate. The theoretical framework for our extension of geostatistical methods is that, conditionally on the unobserved process S(x), observations at sample locations xi form a generalized linear model with the corresponding values of S(xi) appearing as an offset term in the linear predictor. We use a Bayesian inferential framework, implemented via the Markov chain Monte Carlo method, to solve the prediction problem for non-linear functionals of S(x), making a proper allowance for the uncertainty in the estimation of any model parameters.
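
The hierarchical structure described here is easy to state generatively. A minimal sketch of the Poisson case (the covariance and link parameters are assumed values; the Bayesian MCMC inference step, which is the paper's main contribution, is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
# Sampling locations and a zero-mean Gaussian process S(x) with
# exponential covariance (sigma^2 = 0.5, range phi = 0.2 -- assumed values)
x = rng.uniform(0.0, 1.0, (100, 2))
d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
K = 0.5 * np.exp(-d / 0.2)
S = rng.multivariate_normal(np.zeros(100), K + 1e-8 * np.eye(100))
# Conditionally on S, counts at the sampling locations follow a Poisson
# generalized linear model with log link and intercept beta0 = 0.5
y = rng.poisson(np.exp(0.5 + S))
```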

353 citations


Journal ArticleDOI
TL;DR: A novel approach to stable noise modeling is introduced based on the LePage series representation and the results obtained are useful for the prediction of noise statistics in a wide range of environments with deterministic and stochastic power propagation laws.
Abstract: This paper addresses non-Gaussian statistical modeling of interference as a superposition of a large number of small effects from terminals/scatterers distributed in the plane/volume according to a Poisson point process. This problem is relevant to multiple access communication systems without power control and radar. Assuming that the signal strength is attenuated over distance r as 1/r^m, we show that the interference/clutter could be modeled as a spherically symmetric α-stable noise. A novel approach to stable noise modeling is introduced based on the LePage series representation. This establishes grounds to investigate practical constraints in the system model adopted, such as the finite number of interferers and nonhomogeneous Poisson fields of interferers. In addition, the formulas derived allow us to predict noise statistics in environments with lognormal shadowing and Rayleigh fading. The results obtained are useful for the prediction of noise statistics in a wide range of environments with deterministic and stochastic power propagation laws. Computer simulations are provided to demonstrate the efficiency of the α-stable noise model in multiuser communication systems. The analysis presented will be important in the performance evaluation of complex communication systems and in the design of efficient interference suppression techniques.
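
The mechanism behind the heavy-tailed limit can be illustrated by direct simulation of a planar Poisson field (all parameter values are illustrative; with amplitude attenuation 1/r^m and m = 3, nearby emitters make the summed interference have infinite variance, hence the stable rather than Gaussian limit):

```python
import numpy as np

rng = np.random.default_rng(0)

def interference(lam=0.1, m=3.0, R=30.0, reps=5000):
    """In-phase interference at the origin from a planar Poisson field of
    density lam, unit-power emitters, amplitude attenuation 1/r^m."""
    out = np.empty(reps)
    for i in range(reps):
        k = rng.poisson(lam * np.pi * R**2)
        r = R * np.sqrt(rng.uniform(size=k))        # uniform over the disc
        phase = rng.uniform(0.0, 2.0 * np.pi, k)
        out[i] = np.sum(np.cos(phase) / r**m)       # random phases, summed
    return out

z = interference()
# Heavy (power-law) tails: extreme quantiles grow much faster than Gaussian
print(np.percentile(np.abs(z), [50, 90, 99, 99.9]))
```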

293 citations




Book
11 Sep 1998
TL;DR: In this book, the authors develop asymptotic expansions of the maximum likelihood and Bayesian estimators for parametric Poisson process models, together with expansions of their distribution functions and nonstandard problems such as the optimal choice of observation windows.
Abstract: 1 Auxiliary Results.- 1.1 Poisson process.- 1.2 Estimation problems.- 2 First Properties of Estimators.- 2.1 Asymptotic of the maximum likelihood and Bayesian estimators.- 2.2 Minimum distance estimation.- 2.3 Special models of Poisson processes.- 3 Asymptotic Expansions.- 3.1 Expansion of the MLE.- 3.2 Expansion of the Bayes estimator.- 3.3 Expansion of the minimum distance estimator.- 3.4 Expansion of the distribution functions.- 4 Nonstandard Problems.- 4.1 Misspecified model.- 4.2 Nonidentifiable model.- 4.3 Optimal choice of observation windows.- 4.4 Optimal choice of intensity function.- 5 The Change-Point Problems.- 5.1 Phase and frequency estimation.- 5.2 Chess-field problem.- 5.3 Top-hat problem.- 6 Nonparametric Estimation.- 6.1 Intensity measure estimation.- 6.2 Intensity function estimation.- Remarks.
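
The basic likelihood object underlying Chapters 2 and 3 is easy to illustrate. For an inhomogeneous Poisson process on [0, T] with intensity λ(t; θ), the log-likelihood is Σ_i log λ(t_i; θ) − ∫₀ᵀ λ(t; θ) dt. A minimal sketch with an assumed exponential-decay intensity λ(t; θ) = θ e^(−t), not an example from the book:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
T, theta_true = 10.0, 50.0
# Simulate events on [0, T] with intensity lambda(t) = theta * exp(-t):
# N ~ Poisson(theta * (1 - e^{-T})), times i.i.d. via inverse-CDF sampling
n = rng.poisson(theta_true * (1.0 - np.exp(-T)))
t = -np.log(1.0 - rng.uniform(size=n) * (1.0 - np.exp(-T)))

def nll(theta):  # -[ sum_i log lambda(t_i) - integral of lambda over [0, T] ]
    return -(n * np.log(theta) - t.sum() - theta * (1.0 - np.exp(-T)))

mle = minimize_scalar(nll, bounds=(1e-3, 500.0), method="bounded").x
print(mle, n / (1.0 - np.exp(-T)))   # numeric MLE matches the closed form
```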

225 citations


Proceedings ArticleDOI
04 Jan 1998
TL;DR: Four different methods for selecting thresholds are described that work on very different principles: either the noise or the signal is modelled, and the model covers either the spatial or the intensity distribution characteristics.
Abstract: Image differencing is used for many applications involving change detection. Although it is usually followed by a thresholding operation to isolate regions of change, there are few methods available in the literature specific to (and appropriate for) change detection. We describe four different methods for selecting thresholds that work on very different principles. Either the noise or the signal is modelled, and the model covers either the spatial or intensity distribution characteristics. The methods are: 1) a Normal model is used for the noise intensity distribution, 2) signal intensities are tested by making local intensity distribution comparisons in the two image frames (i.e. the difference map is not used), 3) the spatial properties of the noise are modelled by a Poisson distribution, and 4) the spatial properties of the signal are modelled as a stable number of regions (or stable Euler number).
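
Method 1 above is the simplest to sketch: model the difference-image noise intensities as Normal and threshold at a tail quantile. The snippet below is one illustrative reading of that idea; the robust (MAD) scale estimate is an assumption of this sketch, not necessarily the paper's choice.

```python
import numpy as np
from scipy.stats import norm

def threshold_normal_noise(diff, alpha=1e-3):
    """Flag changed pixels in a difference image by modelling the noise
    intensity distribution as Normal (a sketch of method 1)."""
    med = np.median(diff)
    # Robust scale (MAD) so that genuine changes do not inflate the estimate
    sigma = 1.4826 * np.median(np.abs(diff - med))
    tau = norm.ppf(1.0 - alpha) * sigma
    return np.abs(diff - med) > tau

rng = np.random.default_rng(0)
diff = rng.normal(0.0, 2.0, (64, 64))      # pure noise difference image
diff[10:20, 10:20] += 15.0                 # a changed region
print(threshold_normal_noise(diff).sum())  # ~100 pixels flagged
```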

Journal ArticleDOI
TL;DR: In this paper, the authors extend the Poisson model of games with population uncertainty by allowing expected population sizes and players' utility functions to depend on an unknown state of the world.

Journal ArticleDOI
TL;DR: This paper presents several analytical depoissonization results that fall into the following general scheme: if the Poisson transform has appropriate growth in the complex plane, then an asymptotic expansion of the sequence can be expressed in terms of the Poisson transform and its derivatives evaluated on the real line.

Journal ArticleDOI
TL;DR: In this article, an exponentially weighted moving average control chart for monitoring Poisson data is introduced, which is evaluated using a Markov chain approximation, and its average run length is compared to other procedures for Poisson Data.
Abstract: An exponentially weighted moving average control chart for monitoring Poisson data is introduced. The charting procedure is evaluated using a Markov chain approximation, and its average run length is compared to other procedures for Poisson data.
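
A minimal sketch of such a Poisson EWMA chart. The asymptotic control limits and the values of λ and L below are illustrative assumptions; the paper evaluates run lengths with a Markov chain approximation rather than these asymptotic limits.

```python
import numpy as np

def poisson_ewma(counts, mu0, lam=0.2, L=2.7):
    """EWMA chart Z_t = lam*X_t + (1-lam)*Z_{t-1} for Poisson counts, with
    asymptotic limits mu0 +/- L*sqrt(lam/(2-lam)*mu0) (since Var X = mu0)."""
    half_width = L * np.sqrt(lam / (2.0 - lam) * mu0)
    ucl, lcl = mu0 + half_width, max(0.0, mu0 - half_width)
    z = mu0
    for i, x in enumerate(counts):
        z = lam * x + (1.0 - lam) * z
        if z > ucl or z < lcl:
            return i              # first out-of-control signal
    return None

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(4.0, 100), rng.poisson(6.5, 100)])
print(poisson_ewma(counts, mu0=4.0))   # typically signals shortly after t = 100
```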

Journal ArticleDOI
TL;DR: Exponentially weighted moving average (EWMA) control charts are developed for monitoring the rate of occurrence of rare events based on the inter-arrival times of these events, and the average run lengths of the EWMA charts are determined exactly based on a linear regression model.
Abstract: Exponentially weighted moving average (EWMA) control charts are developed for monitoring the rate of occurrence of rare events based on the interarrival times of these events. The average run lengths of the EWMA charts are determined exactly based on the..

Journal ArticleDOI
TL;DR: Although logistic was the easiest model to implement, it should be used in occupational cohort studies only when the outcome is rare (5% or less) and the relative risk is less than approximately 2; even then, the proportional hazards and Poisson models are better choices.
Abstract: This research was conducted to examine the effect of model choice on the epidemiologic interpretation of occupational cohort data. Three multiplicative models commonly employed in the analysis of occupational cohort studies--proportional hazards, Poisson, and logistic regression--were used to analyze data from an historical cohort study of workers exposed to formaldehyde. Samples were taken from this dataset to create a number of predetermined scenarios for comparing the models, varying study size, outcome frequency, strength of risk factors, and follow-up length. The Poisson and proportional hazards models yielded nearly identical relative risk estimates and confidence intervals in all situations except when confounding by age could not be closely controlled in the Poisson analysis. Logistic regression findings were more variable, with risk estimates differing most from the proportional hazards results when there was a common outcome or strong relative risk. The logistic model also provided less precise estimates than the other two. Thus, although logistic was the easiest model to implement, it should be used in occupational cohort studies only when the outcome is rare (5% or less) and the relative risk is less than approximately 2. Even then, the proportional hazards and Poisson models are better choices. Selecting between these two can be based on convenience in most circumstances.
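
The rare-outcome caveat can be seen in a small simulation contrasting two of the three models. This is a sketch using statsmodels with made-up cohort parameters, not the study's data: with a common outcome, the odds ratio from the logistic model overstates the rate ratio that the Poisson model targets.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
exposed = rng.integers(0, 2, n).astype(float)
pt = np.full(n, 10.0)                          # 10 person-years of follow-up
rate = 0.05 * np.exp(np.log(1.5) * exposed)    # true rate ratio = 1.5
y = rng.poisson(rate * pt)                     # event counts per subject

X = sm.add_constant(exposed)
# Poisson regression for rates: log E[y] = log(pt) + b0 + b1 * exposed
poi = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(pt)).fit()
# Logistic regression on ever/never having the outcome
logit = sm.GLM((y > 0).astype(float), X, family=sm.families.Binomial()).fit()
# Outcome is common (~40-50%): the odds ratio (~1.7) inflates the rate ratio (1.5)
print(np.exp(poi.params[1]), np.exp(logit.params[1]))
```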

Journal ArticleDOI
TL;DR: In this paper, a finite mixed Poisson regression model with covariates in both Poisson rates and mixing probabilities is used to analyze the relationship between patents and research and development spending at the firm level.
Abstract: Count-data models are used to analyze the relationship between patents and research and development spending at the firm level, accounting for overdispersion using a finite mixed Poisson regression model with covariates in both Poisson rates and mixing probabilities. Maximum likelihood estimation using the EM and quasi-Newton algorithms is discussed. Monte Carlo studies suggest that (a) penalized likelihood criteria are a reliable basis for model selection and can be used to determine whether continuous or finite support for the mixing distribution is more appropriate and (b) when the mixing distribution is incorrectly specified, parameter estimates remain unbiased but have inflated variances.
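
The EM iteration for a finite Poisson mixture alternates posterior membership probabilities with weighted rate updates. A minimal sketch without covariates (the paper's full model also places covariates in the Poisson rates and mixing probabilities, which this sketch omits):

```python
import numpy as np
from scipy.stats import poisson

def poisson_mixture_em(y, k=2, iters=200, seed=0):
    """EM for a k-component Poisson mixture (no covariates in this sketch)."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                       # mixing probabilities
    lam = y.mean() * rng.uniform(0.5, 1.5, k)     # initial rates
    for _ in range(iters):
        r = w * poisson.pmf(y[:, None], lam)      # E-step: responsibilities
        r /= r.sum(axis=1, keepdims=True)
        w = r.mean(axis=0)                        # M-step: weights ...
        lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)  # ... and rates
    return w, lam

rng = np.random.default_rng(0)
y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 200)])
print(poisson_mixture_em(y))   # weights ~ (0.6, 0.4), rates ~ (2, 9)
```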

Journal ArticleDOI
TL;DR: In this paper, it was shown that for any positive function f on the discrete cube {0,1}^n, $\mathrm{Ent}_{\mu_p^n}(f) \le pq\,\mathbb{E}_{\mu_p^n}\!\left[\frac{1}{f}\,|Df|^2\right]$, where $\mu_p^n$ is the n-fold product of the Bernoulli measure with probability of success p (and q = 1 - p), as well as related inequalities, which may be shown to imply in the limit the classical Gaussian logarithmic Sobolev inequality as well as a logarithmic Sobolev inequality for the Poisson measure.

Journal ArticleDOI
TL;DR: In this paper, it is argued that count data with an excess of zeros can often be modelled using the zero-inflated Poisson distribution, and a variety of applications of biometric interest in which this occurs are considered.
Abstract: Analysis of count data is required in many areas of biometric interest. Often the simple Poisson distribution is not appropriate, since an excess number of zero counts occurs in the count data. Some current approaches for the problem at hand are reviewed. It will be argued that these situations can often be easily modeled using the zero-inflated Poisson distribution. A variety of applications are considered in which this occurs. Possibilities are outlined on how the validity of the zero-inflated Poisson can be assessed, including a comparison with the nonparametric Poisson mixture maximum likelihood estimator.

Book
10 Feb 1998
TL;DR: In this book, the authors present methods for studying the distributions of functionals, with emphasis on Gaussian and Poisson functionals and local limit theorems.
Abstract: Preliminaries.- Methods for studying distributions of functionals.- Gaussian functionals.- Poisson functionals.- Local limit theorems.- Bibliographical notes.- Bibliography.- Index.

Journal ArticleDOI
TL;DR: Two new approximations to the exact log-likelihood of the precorrected measurements are proposed, one based on a 'shifted Poisson' model, the other on a saddle-point approximation to the measurement probability mass function (PMF).
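
As the shifted-Poisson idea is usually stated (a sketch of the model, with notation assumed here rather than taken from the paper): for randoms-precorrected data y_i = prompts - delayeds with randoms mean $\bar r_i$, one has $\mathbb{E}[y_i] = \bar y_i(\theta)$ and $\mathrm{Var}(y_i) = \bar y_i(\theta) + 2\bar r_i$, so matching the first two moments suggests treating the shifted data as Poisson:

$$ y_i + 2\bar r_i \;\sim\; \mathrm{Poisson}\big(\bar y_i(\theta) + 2\bar r_i\big). $$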

Journal ArticleDOI
TL;DR: The authors examine two alternative specifications for estimating event-count models in which the data-generating process results in a larger number of zero counts than would be expected under standard event-count models.
Abstract: This article examines two alternative specifications for estimating event-count models in which the data-generating process results in a larger number of zero counts than would be expected under st...

Journal ArticleDOI
TL;DR: In this article, the spaces of Poisson, compound Poisson and Gamma noises are studied as special cases of a general approach to non-Gaussian white noise calculus, see Ref. 18.
Abstract: We study the spaces of Poisson, compound Poisson and Gamma noises as special cases of a general approach to non-Gaussian white noise calculus, see Ref. 18. We use a known unitary isomorphism between Poisson and compound Poisson spaces in order to transport analytic structures from Poisson space to compound Poisson space. Finally we study a Fock type structure of chaos decomposition on Gamma space.

Journal ArticleDOI
TL;DR: In this paper, a conceptual stochastic model for rainfall, based on a Poisson-cluster process with rectangular pulses representing rain cells, is further developed, and a method for deriving high-order moments is applied to obtain the third-moment function for the model.
Abstract: A conceptual stochastic model for rainfall, based on a Poisson-cluster process with rectangular pulses representing rain cells, is further developed. A method for deriving high-order moments is applied to obtain the third-moment function for the model. This is used with second-order properties to fit the model to January and July time-series taken from a site in Wellington, New Zealand. It is found that the parameter estimates may follow two solution paths converging on an optimum value over a bounded interval. The parameter estimates are used with the model to simulate 200 years of hourly data, and parametric tests used to compare simulated and observed extreme rainfalls. These show good agreement over a range of sampling intervals. The paper concludes with a discussion of the standard errors of the model parameter estimates which are obtained using a non-parametric bootstrap.
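
A Poisson-cluster rectangular-pulse model of this general kind can be sketched in a few lines. The cluster mechanism and all parameter values below are illustrative assumptions, not the fitted Wellington values, and the hourly grid is a coarse discretization of the continuous-time pulses.

```python
import numpy as np

def cluster_rainfall(T=744, lam=0.02, mu_c=5.0, beta=0.1, eta=0.5, mu_x=1.0,
                     seed=0):
    """Hourly rainfall from a Poisson-cluster rectangular-pulse model:
    storm origins ~ Poisson rate lam per hour; each storm spawns a Poisson
    number of rain cells with exponential start lags, durations and depths."""
    rng = np.random.default_rng(seed)
    rain = np.zeros(T)
    for s in rng.uniform(0.0, T, rng.poisson(lam * T)):   # storm origins
        for _ in range(rng.poisson(mu_c)):                # cells per storm
            start = s + rng.exponential(1.0 / beta)
            stop = min(start + rng.exponential(1.0 / eta), T)
            x = rng.exponential(mu_x)                     # cell intensity
            rain[int(start):int(np.ceil(stop))] += x      # coarse hourly grid
    return rain

print(cluster_rainfall().round(2)[:24])   # one simulated day of hourly depths
```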

Journal ArticleDOI
TL;DR: In this article, Poisson homogeneous spaces for Poisson groupoids are classified in terms of Dirac structures for the corresponding Lie bialgebroids, with applications including Drinfel'd's classification for Poisson groups and a description of leaf spaces of foliations as homogeneous spaces of pair groupoids.
Abstract: Poisson homogeneous spaces for Poisson groupoids are classified in terms of Dirac structures for the corresponding Lie bialgebroids. Applications include Drinfel'd's classification in the case of Poisson groups and a description of leaf spaces of foliations as homogeneous spaces of pair groupoids.

Proceedings ArticleDOI
08 Nov 1998
TL;DR: Several data weighting schemes with increasing complexity are examined which progressively account for attenuation, normalization, random coincidences and Compton scatter with both ordinary and shifted Poisson models to evaluate the importance of preserving the Poisson characteristics of PET data when using OSEM reconstruction.
Abstract: To evaluate the importance of preserving the Poisson characteristics of PET data when using OSEM reconstruction, the authors have examined several data weighting schemes with increasing complexity which progressively account for attenuation, normalization, random coincidences and Compton scatter with both ordinary and shifted Poisson models. All schemes have first been tested on 2D dynamic data from a decaying phantom filled with H₂¹⁵O. The images produced by the various weighting schemes were compared to the ones obtained by filtered backprojection reconstruction. Small positive bias (1%) was detected for the FW and SP schemes and may be due to implementation. The impact of these reconstruction methods on physiological parameters is illustrated with clinical protocols measuring myocardial blood flow with ¹³NH₃ and Standard Uptake Values with ¹⁸FDG. Despite the high random rate in cardiac studies, no positive bias was detected in the images when using the AW and ANW schemes with scans precorrected for random and scatter. For whole-body studies, at low random rates, the ANW scheme produced the same quantitative results as the FW and SP ones. That scheme is likely the best choice for obtaining high quality emission images and unbiased ROIs in most situations on current 2D scanners.

Journal ArticleDOI
TL;DR: Modeling a DNA sequence as a stationary Markov chain, it is shown as an application that the compound Poisson approximation is efficient for the number of occurrences of rare stem-loop motifs.
Abstract: We derive a Poisson process approximation for the occurrences of clumps of multiple words and a compound Poisson process approximation for the number of occurrences of multiple words in a sequence of letters generated by a stationary Markov chain. Using the Chen-Stein method, we provide a bound on the error in the approximations. For rare words, these errors tend to zero as the length of the sequence increases to infinity. Modeling a DNA sequence as a stationary Markov chain, we show as an application that the compound Poisson approximation is efficient for the number of occurrences of rare stem-loop motifs.
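
The Poisson approximation for occurrences of a rare word is easy to check empirically. Below is a sketch for the simplest special case of the paper's setting: an i.i.d. uniform sequence (a degenerate stationary Markov chain) and a word with no self-overlap, both assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
word, n, reps = "GATTACA", 20_000, 200   # no proper prefix equals a suffix
counts = []
for _ in range(reps):
    seq = "".join(rng.choice(list("ACGT"), n))
    counts.append(sum(seq.startswith(word, i) for i in range(n - len(word) + 1)))
counts = np.array(counts)
lam = (n - len(word) + 1) * 0.25 ** len(word)   # expected occurrences per sequence
print(lam, counts.mean(), counts.var())          # mean ~ var ~ lam under Poisson
```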

Journal ArticleDOI
TL;DR: In this paper, the authors developed the non-parametric mle for the entire probability model, namely for the total number N of species and the generating distribution F for the expected values of the species' abundances.
Abstract: The proper management of an ecological population is greatly aided by solid information about its species' abundances. For the general heterogeneous Poisson species abundance setting, we develop the non-parametric mle for the entire probability model, namely for the total number N of species and the generating distribution F for the expected values of the species' abundances. Solid estimation of the entire probability model allows us to develop generator-based measures of ecological diversity and evenness which have inferences over similar regions. Also, our methods produce a solid goodness-of-fit test for our model as well as a likelihood ratio test to examine if there is heterogeneity in the expected values of the species' abundances. These estimates and tests are examined, in detail, in the paper. In particular, we apply our methods to important data from the National Breeding Bird Survey and discuss how our methods can also be easily applied to sweep net sampling data. To further examine our methods, we provide simulations for several illustrative situations.

Book
01 Jan 1998
TL;DR: In this paper, the background is modelled as independent observations from an exponential family of distributions with a known 'null' value of the natural parameter, while the signal is given by independent observations with a different value on a particular subregion of the spatial domain.
Abstract: This paper is concerned with statistics that scan a multidimensional spatial region to detect a signal against a noisy background. The background is modelled as independent observations from an exponential family of distributions with a known 'null' value of the natural parameter, while the signal is given by independent observations from the same exponential family, but with a different value of the parameter on a particular subregion of the spatial domain. The main result is an extension to multidimensional time of the method of Pollak and Yakir, which relies on a change of measure motivated by change-point analysis, to evaluate approximately the null distribution of the likelihood ratio statistic. Both large-deviation and Poisson approximations are obtained.

Journal ArticleDOI
TL;DR: This work quantifies the loss that is incurred when time-division multiple access (TDMA) is employed and shows that while in the two-user case and in the absence of dark current the penalty is rather mild, the penalty can be quite severe in the many-users case in the presence of large dark current.
Abstract: The Poisson multiple-access channel (MAC) models many-to-one optical communication through an optical fiber or in free space. For this model we compute the capacity region for the two-user case as a function of the allowed peak power. Focusing on the maximum throughput we generalize our results to the case where the users are subjected to an additional average-power constraint and to the many-users case. We show that contrary to the Gaussian MAC, in the Poisson MAC the maximum throughput is bounded in the number of users. We quantify the loss that is incurred when time-division multiple access (TDMA) is employed and show that while in the two-user case and in the absence of dark current the penalty is rather mild, the penalty can be quite severe in the many-users case in the presence of large dark current. We introduce a generalized TDMA technique that mitigates this loss to a large extent.