
Showing papers on "Poisson distribution" published in 2018


Journal ArticleDOI
TL;DR: Log-binomial and robust (modified) Poisson regression models are popular approaches to estimate risk ratios for binary response variables, but their performance under model misspecification is poorly understood.
Abstract: Log-binomial and robust (modified) Poisson regression models are popular approaches to estimate risk ratios for binary response variables. Previous studies have shown that comparatively they produce similar point estimates and standard errors. However, their performance under model misspecification is poorly understood. In this simulation study, the statistical performance of the two models was compared when the log link function was misspecified or the response depended on predictors through a non-linear relationship (i.e., truncated response). Point estimates from log-binomial models were biased when the link function was misspecified or when the probability distribution of the response variable was truncated at the right tail. The percentage of truncated observations was positively associated with the presence of bias, and the bias was larger if the observations came from a population with a lower response rate, with the other parameters under examination held fixed. In contrast, point estimates from the robust Poisson models were unbiased. Under model misspecification, the robust Poisson model was generally preferable because it provided unbiased estimates of risk ratios.
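
For readers who want to try the robust (modified) Poisson approach, below is a minimal sketch in Python using statsmodels; the simulated data and variable names are illustrative, not from the study. The key ingredients are a Poisson GLM with log link applied to the binary response, plus a robust (sandwich) covariance estimator to repair the misspecified Poisson variance.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated binary outcome with a log-linear risk model:
# P(y = 1 | x) = exp(b0 + b1 * x), so exp(b1) is the risk ratio.
n = 5000
x = rng.binomial(1, 0.5, size=n)          # binary exposure (illustrative)
p = np.exp(-2.0 + 0.5 * x)                # risks ~0.135 (unexposed), ~0.223 (exposed)
y = rng.binomial(1, p)

# Modified Poisson regression: Poisson GLM on the binary response
# with a robust sandwich covariance (HC0).
X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params[1]))              # estimated risk ratio, approx. exp(0.5)
```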

260 citations


MonographDOI
31 Mar 2018
TL;DR: Key tools cited include Campbell's formula for marked point processes, the Campbell–Mecke theorem, and the convergence of Cox point processes as counting measures.
Abstract: Achieve faster and more efficient network design and optimization with this comprehensive guide. Some of the most prominent researchers in the field explain the very latest analytic techniques and results from stochastic geometry for modelling the signal-to-interference-plus-noise ratio (SINR) distribution in heterogeneous cellular networks. This book will help readers to understand the effects of combining different system deployment parameters on key performance indicators such as coverage and capacity, enabling the efficient allocation of simulation resources. In addition to covering results for network models based on the Poisson point process, this book presents recent results for when non-Poisson base station configurations appear Poisson, due to random propagation effects such as fading and shadowing, as well as non-Poisson models for base station configurations, with a focus on determinantal point processes and tractable approximation methods. Theoretical results are illustrated with practical Long-Term Evolution (LTE) applications and compared with real-world deployment results.
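
The flavor of such SINR results can be checked numerically. The following is a minimal Monte Carlo sketch (illustrative, not from the book) of SIR coverage for a Poisson point process of base stations, assuming Rayleigh fading and nearest-station association; all parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_coverage(lam=1e-5, theta_db=0.0, alpha=4.0, radius=5_000.0, trials=4_000):
    """Fraction of trials in which the typical user's SIR exceeds a threshold."""
    theta = 10 ** (theta_db / 10)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius**2)     # base stations in a disc
        if n == 0:
            continue
        r = radius * np.sqrt(rng.uniform(size=n))    # uniform-in-disc distances
        h = rng.exponential(size=n)                  # Rayleigh fading -> exp. power
        power = h * r ** (-alpha)
        k = np.argmin(r)                             # serve from the nearest station
        signal = power[k]
        interference = power.sum() - signal
        if signal > theta * interference:
            covered += 1
    return covered / trials

print(sir_coverage())  # roughly 0.56 for alpha = 4 at a 0 dB threshold
```

For these assumptions the estimate can be compared against the known closed-form coverage expression for Poisson networks with Rayleigh fading (Andrews, Baccelli, and Ganti, 2011), which is a useful sanity check before moving to the non-Poisson models treated in the book.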

126 citations


Journal ArticleDOI
TL;DR: In this article, two new 2D re-entrant topologies with negative Poisson's ratio are presented, and their mechanical properties (Poisson's ratio and energy absorption capacity) are studied with the finite element method as functions of the geometric parameters.
Abstract: In this paper, two new 2D re-entrant topologies with negative Poisson’s ratio are presented and their mechanical properties (Poisson’s ratio and energy absorption capacity) are studied using the finite element method as functions of geometric parameters. The first topology (model 1) was constructed by adding two sinusoidal-shaped ribs into the classical re-entrant topology, while the second topology (model 2) was made by introducing extra vertical ribs to reinforce the sinusoidal-shaped ribs. Simulation results show that the model 1 and model 2 topologies can reach minimum Poisson’s ratio values of −1.12 and −0.58, respectively, with an appropriate geometric aspect ratio. The energy absorption capacities of model 1, model 2 and the classical re-entrant model were studied at various compression velocities. Enhanced energy absorption capacities were observed in the two new re-entrant topologies compared with the classical re-entrant topology. Model 2 exhibited the highest energy absorption capacity and the highest plateau stress. The plateau stress of model 1 was about half that of model 2, and when the compression velocity exceeds 20 m/s, the plateau stress of model 1 becomes lower than that of the classical re-entrant model.

126 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived the coverage probability of a typical receiver, which is an arbitrarily chosen receiving node, assuming independent Nakagami-m fading over all wireless channels.
Abstract: In this paper, we consider a vehicular network in which the wireless nodes are located on a system of roads. We model the roadways, which are predominantly straight and randomly oriented, by a Poisson line process (PLP) and the locations of nodes on each road as a homogeneous 1D Poisson point process. Assuming that each node transmits independently, the locations of transmitting and receiving nodes are given by two Cox processes driven by the same PLP. For this setup, we derive the coverage probability of a typical receiver, which is an arbitrarily chosen receiving node, assuming independent Nakagami-m fading over all wireless channels. Assuming that the typical receiver connects to its closest transmitting node in the network, we first derive the distribution of the distance between the typical receiver and the serving node to characterize the desired signal power. We then characterize coverage probability for this setup, which involves two key technical challenges. First, we need to handle several cases as the serving node can possibly be located on any line in the network and the corresponding interference experienced at the typical receiver is different in each case. Second, conditioning on the serving node imposes constraints on the spatial configuration of lines, which requires careful analysis of the conditional distribution of the lines. We address these challenges in order to characterize the interference experienced at the typical receiver. We then derive an exact expression for coverage probability in terms of the derivative of Laplace transform of interference power distribution. We analyze the trends in coverage probability as a function of the network parameters: line density and node density. We also provide some theoretical insights by studying the asymptotic characteristics of coverage probability.

113 citations


Journal ArticleDOI
TL;DR: In this article, a unit cell model mimicking tensile tests is established; based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains.
Abstract: This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson’s ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson’s ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson’s ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson’s ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson’s ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson’s ratios, material architectures for any Poisson’s ratio in the interval of ν ∈ [ − 0.78 , 0.00 ] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson’s ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson’s ratio are achievable.
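
In the notation of the paper's definition, the secant Poisson's ratio at a prescribed longitudinal engineering strain can be written as

$$\nu_s(\varepsilon_x) = -\,\frac{\varepsilon_y(\varepsilon_x)}{\varepsilon_x}, \qquad \varepsilon_x \in [0.00,\, 0.20],$$

where ε_x and ε_y denote the longitudinal and lateral engineering strains of the unit cell; the optimization drives ν_s toward the target value over the whole strain interval rather than only at infinitesimal strain.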

102 citations


Journal ArticleDOI
TL;DR: Deterministic routes to soft architected materials that can be tailored precisely to yield values of Poisson's ratio in the range from -1 to 1 are introduced, in an isotropic manner, with a tunable strain range from 0% to ∼90%.
Abstract: Auxetic materials with negative Poisson's ratios have important applications across a broad range of engineering areas, such as biomedical devices, aerospace engineering and automotive engineering. A variety of design strategies have been developed to achieve artificial auxetic materials with controllable responses in the Poisson's ratio. The development of designs that can offer isotropic negative Poisson's ratios over large strains can open up new opportunities in emerging biomedical applications, which, however, remains a challenge. Here, we introduce deterministic routes to soft architected materials that can be tailored precisely to yield the values of Poisson's ratio in the range from -1 to 1, in an isotropic manner, with a tunable strain range from 0% to ∼90%. The designs rely on a network construction in a periodic lattice topology, which incorporates zigzag microstructures as building blocks to connect lattice nodes. Combined experimental and theoretical studies on broad classes of network topologies illustrate the wide-ranging utility of these concepts. Quantitative mechanics modeling under both infinitesimal and finite deformations allows the development of a rigorous design algorithm that determines the necessary network geometries to yield target Poisson ratios over desired strain ranges. Demonstrative examples in artificial skin with both the negative Poisson's ratio and the nonlinear stress-strain curve precisely matching those of the cat's skin and in unusual cylindrical structures with engineered Poisson effect and shape memory effect suggest potential applications of these network materials.

101 citations


Journal ArticleDOI
01 Feb 2018 - Ecology
TL;DR: A large-scale screening test with 137 bird data sets from 2,037 sites found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets.
Abstract: Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help.
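
To make the model concrete: a binomial N-mixture model marginalizes a latent site abundance over a Poisson prior, with repeated counts treated as binomial thinning. Below is a minimal Python sketch of that likelihood (illustrative only; the study relied on established implementations, and R packages such as unmarked provide this in full).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def neg_loglik(params, counts, n_max=200):
    """Negative log-likelihood of a Poisson binomial N-mixture model.

    counts: (n_sites, n_visits) array of repeated counts per site.
    Latent abundance N is marginalized over 0..n_max."""
    lam = np.exp(params[0])                  # abundance mean, log scale
    p = 1.0 / (1.0 + np.exp(-params[1]))     # detection probability, logit scale
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)
    ll = 0.0
    for y in counts:
        site_like = prior * binom.pmf(y[:, None], N[None, :], p).prod(axis=0)
        ll += np.log(site_like.sum())
    return -ll

rng = np.random.default_rng(2)
N_true = rng.poisson(5.0, size=100)                    # latent abundances
counts = rng.binomial(N_true[:, None], 0.4, (100, 3))  # three visits, p = 0.4
fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))   # estimates of lambda and p
```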

100 citations


Journal ArticleDOI
TL;DR: The meta distribution provides fine-grained information on the signal-to-interference ratio (SIR) beyond the SIR distribution at the typical user; it is derived here for both the typical user and the worst-case user, yielding insights into the benefits of different cooperation schemes and the impact of the number of cooperating base stations and other network parameters.
Abstract: The meta distribution provides fine-grained information on the signal-to-interference ratio (SIR) compared with the SIR distribution at the typical user. This paper first derives the meta distribution of the SIR in heterogeneous cellular networks with downlink coordinated multipoint transmission/reception, including joint transmission (JT), dynamic point blanking (DPB), and dynamic point selection/dynamic point blanking (DPS/DPB), for the general typical user and the worst-case user (the typical user located at the Voronoi vertex in a single-tier network). A more general scheme called JT-DPB, which is the combination of JT and DPB, is studied. The moments of the conditional success probability are derived for the calculation of the meta distribution and the mean local delay. An exact analytical expression, the beta approximation, and simulation results of the meta distribution are provided. From the theoretical results, we gain insights on the benefits of different cooperation schemes and the impact of the number of cooperating base stations and other network parameters.
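
For reference, the beta approximation mentioned above is a standard moment match (following Haenggi's work on the meta distribution): with M_1 and M_2 the first two moments of the conditional success probability, the meta distribution is approximated by

$$\bar{F}(x) \approx 1 - I_x(\alpha, \beta), \qquad \alpha = \frac{M_1 (M_1 - M_2)}{M_2 - M_1^2}, \qquad \beta = \frac{(1 - M_1)(M_1 - M_2)}{M_2 - M_1^2},$$

where I_x is the regularized incomplete beta function; α and β are chosen so that the beta distribution has mean M_1 and variance M_2 − M_1².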

81 citations


Journal ArticleDOI
TL;DR: In this paper, a quasi-coefficient of variation (QCV) is proposed to diagnose potential bias associated with unidirectional trends in abundance or detectability when using Poisson regression.

73 citations


Journal ArticleDOI
TL;DR: This paper aims to comprehensively establish the relationship between mean speed, speed variation and traffic crashes, using an urban dataset, for the purpose of formulating effective speed management measures.

72 citations


Journal ArticleDOI
TL;DR: In this article, the statistical properties of the Poisson line Cox point process are analyzed for modeling of vehicular networks, including the general quadratic position of the points, the nearest distance distribution, the Laplace functional, the densities of facets of the Cox-Voronoi tessellation, and the asymptotic behavior of the typical Voronoi cell under vehicular densification.
Abstract: This paper analyzes statistical properties of the Poisson line Cox point process useful in the modeling of vehicular networks. The point process is created by a two-stage construction: a Poisson line process to model road infrastructure and independent Poisson point processes, conditionally on the Poisson lines, to model vehicles on the roads. We derive basic properties of the point process, including the general quadratic position of the points, the nearest distance distribution, the Laplace functional, the densities of facets of the Cox–Voronoi tessellation, and the asymptotic behavior of the typical Voronoi cell under vehicular densification. These properties are closely linked to features that are important in vehicular networks.
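
The two-stage construction is straightforward to simulate. The sketch below (illustrative, not from the paper) samples a Poisson line process as lines hitting a disc, then places a 1D Poisson point process of "vehicles" on each chord.

```python
import numpy as np

rng = np.random.default_rng(3)

def plp_cox(lam_line=0.005, mu=0.02, R=1000.0):
    """Sample a Poisson line Cox point process inside a disc of radius R.

    A line is parameterized by its orientation theta and signed distance rho
    from the origin; the expected number of lines hitting the disc is
    2 * lam_line * R. Each line carries a 1D PPP of intensity mu."""
    n_lines = rng.poisson(2 * lam_line * R)
    theta = rng.uniform(0.0, np.pi, n_lines)
    rho = rng.uniform(-R, R, n_lines)
    points = []
    for th, r in zip(theta, rho):
        half_chord = np.sqrt(R**2 - r**2)
        t = rng.uniform(-half_chord, half_chord, rng.poisson(2 * mu * half_chord))
        foot = np.array([r * np.cos(th), r * np.sin(th)])   # closest point to origin
        tangent = np.array([-np.sin(th), np.cos(th)])       # direction along the line
        points.append(foot + t[:, None] * tangent)
    return np.vstack(points) if points else np.empty((0, 2))

vehicles = plp_cox()
print(vehicles.shape)  # nearest-distance statistics etc. can be estimated from here
```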

Posted Content
TL;DR: In this paper, a formal estimation procedure for the parameters of the fractional Poisson process (fPp) is proposed; the fPp makes the standard Poisson model more flexible by permitting non-exponential, heavy-tailed distributions of interarrival times and different scaling properties.
Abstract: The paper proposes a formal estimation procedure for parameters of the fractional Poisson process (fPp). Such procedures are needed to make the fPp model usable in applied situations. The basic idea of fPp, motivated by experimental data with long memory, is to make the standard Poisson model more flexible by permitting non-exponential, heavy-tailed distributions of interarrival times and different scaling properties. We establish the asymptotic normality of our estimators for the two parameters appearing in our fPp model. This fact permits construction of the corresponding confidence intervals. The properties of the estimators are then tested using simulated data.

Journal ArticleDOI
TL;DR: A variational model for phase retrieval based on total variation regularization as an image prior and maximum a posteriori estimation under a Poisson noise model, referred to as "TV-PoiPR", is proposed, together with an efficient numerical algorithm based on the alternating direction method of multipliers whose convergence is established.
Abstract: Phase retrieval plays an important role in vast industrial and scientific applications. We consider a noisy phase retrieval problem in which the magnitudes of the Fourier transform (or a general linear transform) of an underlying object are corrupted by Poisson noise, since optical sensors detect photons and the number of detected photons follows the Poisson distribution. We propose a variational model for phase retrieval based on a total variation regularization as an image prior and maximum a posteriori estimation of a Poisson noise model, which is referred to as "TV-PoiPR". We also propose an efficient numerical algorithm based on an alternating direction method of multipliers and establish its convergence. Extensive experiments for coded diffraction, holographic, and ptychographic patterns are conducted using both real- and complex-valued images to demonstrate the effectiveness of our proposed methods.
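
Schematically, writing A for the linear transform (e.g. the Fourier transform) and y for the measured photon counts, a model of this type minimizes a Poisson negative log-likelihood plus a TV prior; the exact formulation in the paper may differ in constants and constraints:

$$\min_{x} \; \sum_i \Big( |(Ax)_i|^2 - y_i \log |(Ax)_i|^2 \Big) + \lambda\, \mathrm{TV}(x),$$

where the first term is the negative log Poisson likelihood of counts y_i with intensity |(Ax)_i|², and λ balances data fidelity against the total variation prior; ADMM then alternates between the likelihood and TV subproblems.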

Journal ArticleDOI
TL;DR: In this article, a new analytical model is developed for three types of 2D periodic star-shaped re-entrant lattice structures that possess orthotropic symmetry and exhibit negative Poisson's ratios.

Journal ArticleDOI
TL;DR: The rule of mixtures and percolation theory were used to model the observed decrease of Poisson's ratio with increasing porosity; the Young's modulus calculated from the measured Poisson's ratio followed the usual power-law dependence on relative density with a characteristic exponent of 1.72 ± 0.1, confirming that the obtained experimental results for Poisson's ratio are valid.
Abstract: A nondestructive impulse excitation technique was used to investigate Poisson's ratio of powder metallurgical pure closed-cell aluminium foams according to ASTM E 1876 within the foam density range of 0.430–1.390 g·cm⁻³. Instead of a constant value of 0.34, as assumed by Gibson and Ashby for the Poisson's ratio of metallic foams, a decrease of the Poisson's ratio with decreasing foam density was observed. Observed Poisson's ratio data were in the range of 0.21–0.34. To check the validity of the results, the Young's modulus was calculated using Poisson's ratio, and its dependence on relative density was successfully modelled using the usual power law function with a characteristic exponent of 1.72 ± 0.1. This confirms that the obtained experimental results for Poisson's ratio are valid. Finally, the rule of mixtures and percolation theory were used to model the observed decrease of Poisson's ratio with increasing porosity.
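
The power law referred to is the usual scaling for cellular solids (of Gibson–Ashby type); with the exponent reported above it reads

$$\frac{E}{E_s} = C \left( \frac{\rho}{\rho_s} \right)^{n}, \qquad n = 1.72 \pm 0.1,$$

where E_s and ρ_s are the Young's modulus and density of the fully dense solid and C is a constant of order one.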

BookDOI
03 Sep 2018
TL;DR: The author explains Cox regression and Poisson regression for event history and survival data, and the principles behind their application using the R packages eha and survival.
Abstract: Contents:
Preface
Event History and Survival Data: Introduction; Survival Data; Right Censoring; Left Truncation; Time Scales; Event History Data; More Data Sets
Single Sample Data: Introduction; Continuous Time Model Descriptions; Discrete Time Models; Nonparametric Estimators; Doing it in R
Cox Regression: Introduction; Proportional Hazards; The Log-Rank Test; Proportional Hazards in Continuous Time; Estimation of the Baseline Hazard; Explanatory Variables; Interactions; Interpretation of Parameter Estimates; Proportional Hazards in Discrete Time; Model Selection; Male Mortality
Poisson Regression: Introduction; The Poisson Distribution; The Connection to Cox Regression; The Connection to the Piecewise Constant Hazards Model; Tabular Lifetime Data
More on Cox Regression: Introduction; Time-Varying Covariates; Communal Covariates; Tied Event Times; Stratification; Sampling of Risk Sets; Residuals; Checking Model Assumptions; Fixed Study Period Survival; Left- or Right-Censored Data
Parametric Models: Introduction; Proportional Hazards Models; Accelerated Failure Time Models; Proportional Hazards or AFT Model?; Discrete Time Models
Multivariate Survival Models: Introduction; Frailty Models; Parametric Frailty Models; Stratification
Competing Risks Models: Introduction; Some Mathematics; Estimation; Meaningful Probabilities; Regression; R Code for Competing Risks
Causality and Matching: Introduction; Philosophical Aspects of Causality; Causal Inference; Aalen's Additive Hazards Model; Dynamic Path Analysis; Matching; Conclusion
Basic Statistical Concepts: Introduction; Statistical Inference; Asymptotic Theory; Model Selection
Survival Distributions: Introduction; Relevant Distributions in R; Parametric Proportional Hazards and Accelerated Failure Time Models
A Brief Introduction to R: R in General; Some Standard R Functions; Writing Functions; Graphics; Probability Functions; Help in R; Functions in eha and survival; Reading Data into R
Survival Packages in R: Introduction; eha; survival; Other Packages
Bibliography
Index

Journal ArticleDOI
TL;DR: In this paper, a new class of discrete generalized linear models based on the class of Poisson-Tweedie factorial dispersion models with variance of the form μ+ϕμp, where μ is the mean and ϕ and p are t...
Abstract: We propose a new class of discrete generalized linear models based on the class of Poisson–Tweedie factorial dispersion models with variance of the form μ + ϕμ^p, where μ is the mean and ϕ and p are the dispersion and power parameters, respectively.
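
Written out, the variance function is

$$\operatorname{Var}(Y) = \mu + \phi\,\mu^{p},$$

where ϕ controls dispersion and p indexes the family. As background (stated here from the usual Poisson–Tweedie correspondence rather than from the abstract), commonly cited special cases are the Hermite (p = 0), Neyman Type A (p = 1), negative binomial (p = 2) and Poisson-inverse Gaussian (p = 3) distributions.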

Journal ArticleDOI
TL;DR: This paper shows that the problem of infrared (IR) spectrum degradation can be cast as a maximum a posteriori (MAP) problem and solved by minimizing a cost function that includes a likelihood term and two prior terms.
Abstract: An FTIR spectrometer often suffers from the common problems of band overlap and Poisson noise. In this paper, we show that the problem of infrared (IR) spectrum degradation can be cast as a maximum a posteriori (MAP) problem and solved by minimizing a cost function that includes a likelihood term and two prior terms. In the MAP framework, the likelihood probability density function (PDF) is constructed based on the observed Poisson noise model. A fitted distribution of curvelet transform coefficients is used as the spectral prior PDF, and the instrument response function (IRF) prior is described by a Gauss-Markov function. Moreover, the split Bregman iteration method is employed to solve the resulting minimization problem, which greatly reduces the computational load. As a result, the Poisson noise is effectively removed, while the spectral structure information is well preserved. The novelty of the proposed method lies in its ability to estimate the IRF and the latent spectrum in a joint framework, thus eliminating the degradation effects to a large extent. The reconstructed IR spectrum is more convenient for extracting spectral features and interpreting unknown chemical or biological materials.

Journal ArticleDOI
Zeyao Chen, Zhe Wang, Shiwei Zhou, Jianwang Shao, Xian Wu
TL;DR: Three types of novel lattices with negative Poisson's ratio, constructed by embedding different ribs into a classic re-entrant structure, are proposed to improve not only stiffness and strength but also energy absorption capacity, signifying the prospect of such lattices for enhancing engineering structures.
Abstract: The weak stiffness and strength of materials with negative Poisson’s ratio limit their application. In this paper, three types of novel lattices with negative Poisson’s ratio are proposed to improve not only stiffness and strength but also energy absorption capacity by embedding different ribs into a classic re-entrant structure. Unit cell analyses show these novel lattices have significantly increased Young’s modulus along the loading direction, and Type C can maintain sufficient negative Poisson’s ratio performance compared with the base lattice. In addition, the novel lattices exhibit higher yield stress, plateau stress and densification strain extracted from quasi-static compressive simulation. The lattices are prototyped by laser-based additive manufacturing and tested in quasi-static experiments, which show the experimental data match the numerical results within a margin of error. The work signifies the prospect of lattices with negative Poisson’s ratio in enhancing engineering-applicable structures, and indicates the potential of structural topology optimization in more sophisticated designs.

Journal ArticleDOI
TL;DR: A more flexible family of models is proposed that can account for a more diverse set of mean-variance relationships, and it is demonstrated that a majority of neurons in a V1 population are better described by a model with a nonquadratic relationship between mean and variance.
Abstract: Neurons in many brain areas exhibit high trial-to-trial variability, with spike counts that are overdispersed relative to a Poisson distribution. Recent work (Goris, Movshon, & Simoncelli, 2014) ha...
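
A generic way to probe where a neuron sits relative to Poisson is to regress log variance on log mean across conditions: Poisson predicts variance equal to the mean (slope 1), while multiplicative gain fluctuations push the relation toward quadratic (slope 2). A minimal illustrative sketch (not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Doubly stochastic spike counts: Poisson with a fluctuating gain per trial,
# which produces overdispersion relative to a pure Poisson model.
base_rates = rng.gamma(shape=2.0, scale=4.0, size=(25, 1))   # 25 conditions
gain = rng.gamma(shape=5.0, scale=0.2, size=(25, 200))       # mean-1 gain noise
counts = rng.poisson(base_rates * gain)                      # 200 trials each

m = counts.mean(axis=1)
v = counts.var(axis=1, ddof=1)
slope, intercept = np.polyfit(np.log(m), np.log(v), 1)
print(f"log-log slope = {slope:.2f} (1 = Poisson-like, 2 = quadratic overdispersion)")
```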

Journal ArticleDOI
TL;DR: It is proved that starting from a uniform vertex both accelerates mixing to $O(\log n)$ and concentrates it (the cutoff phenomenon occurs), and analogous results are given for graphs with prescribed degree sequences, where cutoff is shown both for the simple and for the non-backtracking random walk.
Abstract: We study random walks on the giant component of the Erdős–Rényi random graph $G(n,p)$ where $p = \lambda/n$ for $\lambda > 1$ fixed. The mixing time from a worst starting point was shown by Fountoulakis and Reed, and independently by Benjamini, Kozma and Wormald, to have order $\log^2 n$. We prove that starting from a uniform vertex (equivalently, from a fixed vertex conditioned to belong to the giant) both accelerates mixing to $O(\log n)$ and concentrates it (the cutoff phenomenon occurs): the typical mixing is at $(\nu d)^{-1}\log n \pm (\log n)^{1/2+o(1)}$, where $\nu$ and $d$ are the speed of random walk and dimension of harmonic measure on a $\mathrm{Poisson}(\lambda)$-Galton–Watson tree. Analogous results are given for graphs with prescribed degree sequences, where cutoff is shown both for the simple and for the nonbacktracking random walk.

Journal ArticleDOI
TL;DR: In this article, the authors developed computationally efficient graphical goodness of fit checks and measures of overdispersion for binomial N-mixture models, and evaluated both the ability of the checks to detect lack of fit and how lack of fit affects estimates of abundance.
Abstract: Binomial N-mixture models are commonly applied to analyze population survey data. By estimating detection probabilities, N-mixture models aim at extracting information about abundances in terms of actual and not just relative numbers. This separation of detection probability and abundance relies on parametric assumptions about the distribution of individuals among sites and of detections of individuals among repeat visits to sites. Current methods for checking assumptions are limited, and their computational complexity has hindered evaluations of their performance. We develop computationally efficient graphical goodness of fit checks and measures of overdispersion for binomial N-mixture models. These checks are illustrated in a case study, and evaluated in simulations under two scenarios. The two scenarios assume overdispersion in the abundance distribution via a negative binomial distribution or in the detection probability via a beta-binomial distribution. We evaluate the ability of the checks to detect lack of fit, and how lack of fit affects estimates of abundances. The simulations show that if the parametric assumptions are incorrect there can be severe biases in estimated abundances: negatively if there is overdispersion in abundance relative to the fitted model and positively if there is overdispersion in detection. Our goodness of fit checks performed well in detecting lack of fit when the abundance distribution is overdispersed, but struggled to detect lack of fit when detections were overdispersed. We show that the inability to detect lack of fit due to overdispersed detection is caused by a fundamental similarity between N-mixture models with beta-binomial detections and N-mixture models with negative binomial abundances. The strong biases in estimated abundances that can occur in the binomial N-mixture model when the distribution of individuals among sites, or the detection model, is mis-specified implies that checking goodness of fit is essential for sound inference in ecological studies that use these methods. To check the assumptions we provide computationally efficient goodness of fit checks that are available in an R-package nmixgof. However, even when a binomial N-mixture model appears to fit the data well, estimates are not robust in the presence of overdispersion unless additional information about detection is collected.

Journal ArticleDOI
TL;DR: In this article, the phase transition of random radii Poisson Boolean percolation is studied for unbounded (and possibly heavy-tailed) radii distributions, whereas previous work considered only radii uniformly bounded from above.
Abstract: We study the phase transition of random radii Poisson Boolean percolation: around each point of a planar Poisson point process, we draw a disc of random radius, independently for each point. The behavior of this process is well understood when the radii are uniformly bounded from above. In this article, we investigate this process for unbounded (and possibly heavy-tailed) radii distributions. Under mild assumptions on the radius distribution, we show that both the vacant and occupied sets undergo a phase transition at the same critical parameter $\lambda_c$. Moreover, the techniques we develop in this article can be applied to other models such as Poisson Voronoi and confetti percolation.

Journal ArticleDOI
TL;DR: Based on the distribution studied in this article, a new regression model in which the response variable is a count is introduced, and residuals are proposed for the new count regression model.

Journal ArticleDOI
TL;DR: In this paper, different estimation procedures for the parameters of the Poisson–exponential distribution are presented, including the maximum likelihood, method of moments, modified moments, ordinary and weighted least-squares, percentile, maximum product of spacings, Cramér–von Mises and Anderson–Darling maximum goodness-of-fit estimators, and the procedures are compared using extensive numerical simulations.
Abstract: In this study, we present different estimation procedures for the parameters of the Poisson–exponential distribution, such as the maximum likelihood, method of moments, modified moments, ordinary and weighted least-squares, percentile, maximum product of spacings, Cramér–von Mises and Anderson–Darling maximum goodness-of-fit estimators, and compare them using extensive numerical simulations. We show that the Anderson–Darling estimator is the most efficient for estimating the parameters of the proposed distribution. Our proposed methodology is also illustrated in three real data sets related to the minimum, average and maximum flows during October at the São Carlos River in Brazil, demonstrating that the Poisson–exponential distribution is a simple alternative for hydrological applications.
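
Minimum Anderson–Darling estimation is easy to sketch for any parametric cdf. The code below is illustrative; it uses an exponential stand-in because the Poisson–exponential cdf is not reproduced here, but that cdf would be plugged in the same way.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon

def anderson_darling(u):
    """Anderson-Darling statistic from model CDF values evaluated at the data."""
    u = np.clip(np.sort(u), 1e-12, 1 - 1e-12)
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

def ad_fit(data, cdf, x0):
    """Minimum Anderson-Darling distance estimate for cdf(data, params)."""
    return minimize(lambda p: anderson_darling(cdf(data, p)), x0,
                    method="Nelder-Mead").x

data = expon.rvs(scale=2.0, size=500, random_state=5)
cdf = lambda x, p: expon.cdf(x, scale=np.exp(p[0]))   # log-scale keeps scale > 0
print(np.exp(ad_fit(data, cdf, x0=[0.0])))            # close to the true scale 2.0
```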

Journal ArticleDOI
TL;DR: In this article, a low-dimensional edge-based model of ordinary differential equations incorporating an arbitrary heterogeneous number of individual contacts is formulated, and the basic reproduction numbers for the global and local spreading pathways are derived.
Abstract: This paper concerns SIR dynamics with two types of spreading mechanism, i.e., local spreading through network contacts and global spreading through casual contacts. A low-dimensional edge-based model of ordinary differential equations incorporating an arbitrary heterogeneous number of individual contacts is formulated. The basic reproduction number $R_0$ is obtained; on networks of Poisson type it is the sum of the basic reproduction numbers for the global and local spreading pathways, whereas on networks of other types it is a nonlinear function of the two. To measure the control efforts imposed on one specific transmission pathway, type reproduction numbers for the global and local transmission pathways are calculated, respectively. Equations for the final epidemic size are analytically derived. Finally, the numerical solutions of our model are compared with the ensemble averages of stochastic simulations. Simulations have shown that casual contacts in the population may trigger large stochastic fluctuations, which may cause huge variances around the mean; in this scenario the ensemble mean is not a good representation of the behavior of the stochastic epidemic process. However, increasing the local infection rate or the connectedness of the networks yields better predictions. The results provide insights for setting up a framework for the analysis and containment of multiple routes of epidemic transmission in reality.

Posted Content
TL;DR: The core of the algorithm is based on a general framework for estimating any ML model with multiple fixed effects; the algorithm outperforms existing methods in terms of computing time or, in the Gaussian case, is on a par with the most efficient ones.
Abstract: Fixed-effect models are widely used econometric methods. This paper presents the R package FENmlm, which is devoted to the estimation of maximum likelihood (ML) models with any number of fixed effects. The core of the algorithm, detailed in the paper, is based on a general framework to estimate any ML model with multiple fixed effects. It also integrates a fixed-point acceleration method to hasten the convergence of the fixed-effect coefficients. The R function offers the user a simple way to estimate any of four different maximum likelihood models: Poisson, negative binomial, Gaussian and logit. Illustrations with real data detail the estimation process as well as the clustering of standard errors and the various tools to export and manage results from multiple estimations. Simulations show that the algorithm outperforms existing methods in terms of computing time (often by orders of magnitude) or, in the Gaussian case, is on a par with the most efficient ones. Most interestingly, apart from the Gaussian case, the algorithm is revealed to be the only one able to estimate models with many fixed effects on a simple laptop. FENmlm is free software distributed under the general public license, as part of the R software project.
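
The fixed-point idea at the core can be illustrated for a Poisson model with two sets of fixed effects, where the profile updates for the fixed effects have closed form. The numpy sketch below is a bare-bones illustration of that idea only; FENmlm's actual algorithm, with acceleration and general ML families, is considerably more elaborate, and all names and data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated two-way panel: y_ij ~ Poisson(exp(a_i + g_j + beta * x_ij)).
I, J, beta_true = 50, 20, 0.5
x = rng.normal(size=(I, J))
a_true = 0.5 * rng.normal(size=(I, 1))
g_true = 0.5 * rng.normal(size=(1, J))
y = rng.poisson(np.exp(a_true + g_true + beta_true * x))

beta = 0.0
a = np.zeros((I, 1))
g = np.zeros((1, J))
for _ in range(100):
    # Closed-form profile updates: exp(a_i) = sum_j y_ij / sum_j exp(g_j + b x_ij)
    # (assumes every row and column has at least one nonzero count).
    a = np.log(y.sum(1, keepdims=True)) - np.log(np.exp(g + beta * x).sum(1, keepdims=True))
    g = np.log(y.sum(0, keepdims=True)) - np.log(np.exp(a + beta * x).sum(0, keepdims=True))
    mu = np.exp(a + g + beta * x)
    beta += ((y - mu) * x).sum() / (mu * x * x).sum()   # one Newton step on beta

print(f"beta_hat = {beta:.3f} (true value {beta_true})")
```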

Journal ArticleDOI
TL;DR: In this article, a notion of exponential families for CRMs, called exponential CRMs, is introduced, allowing automatic Bayesian nonparametric conjugate priors for exponential CRM likelihoods.
Abstract: We demonstrate how to calculate posteriors for general Bayesian nonparametric priors and likelihoods based on completely random measures (CRMs). We further show how to represent Bayesian nonparametric priors as a sequence of finite draws using a size-biasing approach, and how to represent full Bayesian nonparametric models via finite marginals. Motivated by conjugate priors based on exponential family representations of likelihoods, we introduce a notion of exponential families for CRMs, which we call exponential CRMs. This construction allows us to specify automatic Bayesian nonparametric conjugate priors for exponential CRM likelihoods. We demonstrate that our exponential CRMs allow particularly straightforward recipes for size-biased and marginal representations of Bayesian nonparametric models. Along the way, we prove that the gamma process is a conjugate prior for the Poisson likelihood process and the beta prime process is a conjugate prior for a process we call the odds Bernoulli process. We deliver a size-biased representation of the gamma process and a marginal representation of the gamma process coupled with a Poisson likelihood process.
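
The finite-dimensional shadow of the gamma-process/Poisson-likelihood conjugacy proved in the paper is the classical gamma-Poisson update:

$$\lambda \sim \mathrm{Gamma}(a, b), \quad x \mid \lambda \sim \mathrm{Poisson}(\lambda) \;\;\Longrightarrow\;\; \lambda \mid x \sim \mathrm{Gamma}(a + x,\; b + 1),$$

with the shape updated by the observed count and the rate incremented by the exposure; the paper lifts this conjugacy from individual rates to the level of completely random measures.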

Journal ArticleDOI
TL;DR: The significance of a new source or effect in on/off counting experiments is commonly estimated with a widely used formula from Li & Ma, which assumes that both measurements are Poisson random variables; this article derives more general formulas covering backgrounds that are known exactly, carry an additional systematic uncertainty, or are Gaussian rather than Poisson.
Abstract: Several experiments in high-energy physics and astrophysics can be treated as on/off measurements, where an observation potentially containing a new source or effect ("on" measurement) is contrasted with a background-only observation free of the effect ("off" measurement). In counting experiments, the significance of the new source or effect can be estimated with a widely used formula from Li & Ma, which assumes that both measurements are Poisson random variables. In this paper we study three other cases: (i) the ideal case where the background measurement has no uncertainty, which can be used to study the maximum sensitivity that an instrument can achieve, (ii) the case where the background estimate b in the off measurement has an additional systematic uncertainty, and (iii) the case where b is a Gaussian random variable instead of a Poisson random variable. The latter case applies when b comes from a model fitted on archival or ancillary data, or from the interpolation of a function fitted on data surrounding the candidate new source/effect. Practitioners typically use a formula that is only valid when b is large and when its uncertainty is very small, while we derive a general formula that can be applied in all regimes. We also develop simple methods that can be used to assess how much an estimate of significance is sensitive to systematic uncertainties on the efficiency or on the background. Examples of applications include the detection of short gamma-ray bursts and of new X-ray or γ-ray sources. All the techniques presented in this paper are made available in ready-to-use Python code.
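
For reference, the Li & Ma (1983) likelihood-ratio significance for Poisson on/off counts N_on and N_off, with exposure ratio α = t_on/t_off, is

$$S_{\mathrm{LiMa}} = \sqrt{2} \left\{ N_{\mathrm{on}} \ln\!\left[\frac{1+\alpha}{\alpha}\,\frac{N_{\mathrm{on}}}{N_{\mathrm{on}}+N_{\mathrm{off}}}\right] + N_{\mathrm{off}} \ln\!\left[(1+\alpha)\,\frac{N_{\mathrm{off}}}{N_{\mathrm{on}}+N_{\mathrm{off}}}\right] \right\}^{1/2},$$

and the generalizations derived in the paper replace the Poisson off measurement with an exactly known, systematically uncertain, or Gaussian background.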

Journal ArticleDOI
TL;DR: In this article, the Vlasov-Poisson-Fokker-Planck system with uncertainty and multiple scales is studied, where the uncertainty, modeled by random variables, enters the solution through initial data.
Abstract: We study the Vlasov–Poisson–Fokker–Planck system with uncertainty and multiple scales. Here the uncertainty, modeled by random variables, enters the solution through initial data, while the mult...