
Showing papers on "Poisson distribution published in 2016"


Journal ArticleDOI
TL;DR: In this paper, the meta distribution of the SIR, i.e., the distribution of the conditional success probability given the point process, is derived for Poisson bipolar and cellular networks with Rayleigh fading, and bounds and a simple approximation for it are provided.
Abstract: The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper focuses on the meta distribution of the SIR, which is the distribution of the conditional success probability $P_{\rm s}$ given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as “What fraction of users in a Poisson cellular network achieve 90% link reliability if the required SIR is 5 dB?” Interestingly, in the bipolar model, if the transmit probability $p$ is reduced while increasing the network density $\lambda$ such that the density of concurrent transmitters $\lambda p$ stays constant as $p\rightarrow 0$, $P_{\rm s}$ degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is that of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability $p$, the variance of $P_{\rm s}$ approaches a non-zero constant when $p$ is reduced to 0 while keeping the mean success probability constant.
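A minimal Monte Carlo sketch of the bipolar-model meta distribution (Python; the density, ALOHA probability, link distance, path-loss exponent, SIR threshold, and finite simulation window below are illustrative assumptions, not the paper's setup). For Rayleigh fading, the success probability of the typical link conditioned on the interferer locations has a simple product form, so one can sample Poisson configurations, evaluate the conditional success probability for each, and read off quantities such as the fraction of links with at least 90% reliability.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p, r, alpha = 1.0, 0.25, 0.5, 4.0   # density, ALOHA prob., link distance, path loss
theta = 10 ** (5 / 10)                   # SIR threshold of 5 dB
R, n_real = 20.0, 5000                   # window radius (truncates the PPP), realizations

ps = np.empty(n_real)
for i in range(n_real):
    n = rng.poisson(lam * np.pi * R**2)
    d = R * np.sqrt(rng.random(n))       # interferer distances to the typical receiver
    # conditional success probability given the point process (Rayleigh fading, ALOHA p):
    # P_s = prod_x [ 1 - p + p / (1 + theta * (r / ||x||)^alpha) ]
    ps[i] = np.prod(1 - p + p / (1 + theta * (r / d) ** alpha))

print("mean success probability:", ps.mean())
print("fraction of links with >= 90% reliability:", np.mean(ps >= 0.9))
```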

295 citations


Journal ArticleDOI
TL;DR: The inter-relationship between the underlying structure and a negative Poisson's ratio is discussed in functional materials, including macroscopic bulk, low-dimensional nanoscale particles, films, sheets, or tubes.
Abstract: Materials with negative Poisson's ratio attract considerable attention due to their underlying intriguing physical properties and numerous promising applications, particularly in stringent environments such as aerospace and defense areas, because of their unconventional mechanical enhancements. Recent progress in materials with a negative Poisson's ratio is reviewed here, with the current state of research regarding both theory and experiment. The inter-relationship between the underlying structure and a negative Poisson's ratio is discussed in functional materials, including macroscopic bulk, low-dimensional nanoscale particles, films, sheets, or tubes. The coexistence and correlations with other negative indexes (such as negative compressibility and negative thermal expansion) are also addressed. Finally, open questions and future research opportunities are proposed for functional materials with negative Poisson's ratios.

231 citations


Journal ArticleDOI
TL;DR: The results support the existence of a cross-plane intralayer negative Poisson's ratio in the constituent phosphorene layers under uniaxial deformation along the zigzag axis, which is in line with a previous theoretical prediction.
Abstract: The Poisson’s ratio of a material characterizes its response to uniaxial strain. Materials normally possess a positive Poisson’s ratio - they contract laterally when stretched, and expand laterally when compressed. A negative Poisson’s ratio is theoretically permissible but has not, with a few exceptions of man-made bulk structures, been experimentally observed in any natural materials. Here, we show that a negative Poisson’s ratio exists in the low-dimensional natural material black phosphorus and that our experimental observations are consistent with first-principles simulations. By applying uniaxial strain along the armchair direction, we have succeeded in demonstrating a cross-plane interlayer negative Poisson’s ratio in black phosphorus for the first time. Meanwhile, our results support the existence of a cross-plane intralayer negative Poisson’s ratio in the constituent phosphorene layers under uniaxial deformation along the zigzag axis, which is in line with a previous theoretical prediction. The ...

175 citations


Journal ArticleDOI
TL;DR: With a computational cost at worst twice that of the noniterative scheme, the proposed algorithm provides significantly better quality, particularly at low signal-to-noise ratio, outperforming much costlier state-of-the-art alternatives.
Abstract: We denoise Poisson images with an iterative algorithm that progressively improves the effectiveness of variance-stabilizing transformations (VST) for Gaussian denoising filters. At each iteration, a combination of the Poisson observations with the denoised estimate from the previous iteration is treated as scaled Poisson data and filtered through a VST scheme. Due to the slight mismatch between a true scaled Poisson distribution and this combination, a special exact unbiased inverse is designed. We present an implementation of this approach based on the BM3D Gaussian denoising filter. With a computational cost at worst twice that of the noniterative scheme, the proposed algorithm provides significantly better quality, particularly at low signal-to-noise ratio, outperforming much costlier state-of-the-art alternatives.
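A hedged sketch of the iteration (Python): a plain Gaussian filter stands in for the BM3D denoiser, and the closed-form asymptotically unbiased inverse of the Anscombe transformation stands in for the paper's exact unbiased inverse; the combination weights are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(d):
    # asymptotically unbiased algebraic inverse (stand-in for the exact unbiased inverse)
    return (d / 2.0) ** 2 - 1.0 / 8.0

def iterative_vst_denoise(z, lambdas=(1.0, 0.6, 0.35, 0.2), sigma=1.5):
    y_hat = z.astype(float)
    for lam in lambdas:
        z_bar = lam * z + (1.0 - lam) * y_hat           # combine observation and estimate
        scaled = z_bar / lam                            # treated as scaled Poisson data
        d = gaussian_filter(anscombe(scaled), sigma)    # Gaussian denoising after the VST
        y_hat = lam * np.maximum(inverse_anscombe(d), 0.0)
    return y_hat

# toy example: a smooth low-intensity image corrupted by Poisson noise
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
truth = 2.0 + 8.0 * np.exp(-((xx - 0.5) ** 2 + (yy - 0.5) ** 2) / 0.05)
noisy = rng.poisson(truth).astype(float)
print("MSE noisy:   ", np.mean((noisy - truth) ** 2))
print("MSE denoised:", np.mean((iterative_vst_denoise(noisy) - truth) ** 2))
```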

126 citations


Journal ArticleDOI
TL;DR: The results reveal that a slight change of the arrival rate may greatly affect the fraction of unstable queues in the network, and that the gap between the sufficient conditions and the necessary conditions is small when the access probability, the density of transmitters, or the SINR threshold is small.
Abstract: We investigate the stable packet arrival rate region of a discrete-time slotted random access network, where the sources are distributed as a Poisson point process. Each of the sources in the network has a destination at a given distance and a buffer of infinite capacity. The network is assumed to be random but static, i.e., the sources and the destinations are placed randomly and remain static during all the time slots. We employ tools from queueing theory as well as point process theory to study the stability of this system using the concept of dominance. The problem is an instance of the interacting queues problem, further complicated by the Poisson spatial distribution. We obtain sufficient conditions and necessary conditions for stability. Numerical results show that the gap between the sufficient conditions and the necessary conditions is small when the access probability, the density of transmitters, or the SINR threshold is small. The results also reveal that a slight change of the arrival rate may greatly affect the fraction of unstable queues in the network.

120 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare the predictions of Poisson DFN to new DFN models where fractures result from a growth process defined by simplified kinematic rules for nucleation, growth and fracture arrest.
Abstract: A major use of DFN models for industrial applications is to evaluate permeability and flow structure in hardrock aquifers from geological observations of fracture networks. The relationship between the statistical fracture density distributions and permeability has been extensively studied, but there has been little interest in the spatial structure of DFN models, which is generally assumed to be spatially random (i.e. Poisson). In this paper, we compare the predictions of Poisson DFNs to new DFN models where fractures result from a growth process defined by simplified kinematic rules for nucleation, growth and fracture arrest (Davy et al., 2010, 2013). This so-called ‘kinematic fracture model’ is characterized by a large proportion of T-intersections, and a smaller number of intersections per fracture. Several kinematic models were tested and compared with Poisson DFN models with the same density, length and orientation distributions. Connectivity, permeability and flow distribution were calculated for 3D networks with a self-similar power-law fracture length distribution. For the same statistical properties in orientation and density, the permeability is systematically and significantly smaller, by a factor of 1.5 to 10, for kinematic than for Poisson models. In both cases, the permeability is well described by a linear relationship with the areal density p32, but the threshold of kinematic models is 50% larger than that of Poisson models. Flow channeling is also enhanced in kinematic DFN models. This analysis demonstrates the importance of choosing an appropriate DFN organization for predicting flow properties from fracture network parameters.

116 citations


Journal ArticleDOI
01 Oct 2016-Icarus
TL;DR: In this article, an approach for exact evaluation of a crater chronology model using Poisson statistics and Bayesian inference is presented, expressing the result as a likelihood function with an intrinsic uncertainty.

110 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a Plug-and-Play-Prior approach for coupling Gaussian denoising algorithms to Poisson noisy inverse problems, which is based on a general approach termed "Plug-and Play-Prior".

109 citations


Journal ArticleDOI
TL;DR: This work compares eight model structures for mortality in the R language and discusses the possibility of forecasting with these models; in particular, the introduction of cohort terms generally leads to an improvement in overall fit, but can also make forecasting with these models problematic.
Abstract: Many common models of mortality can be expressed compactly in the language of either generalized linear models or generalized non-linear models. The R language provides a description of these models which parallels the usual algebraic definitions but has the advantage of a transparent and flexible model specification. We compare eight model structures for mortality. For each structure, we consider (a) the Poisson models for the force of mortality with both log and logit link functions and (b) the binomial models for the rate of mortality with logit and complementary log–log link functions. Part of this work shows how to extend the usual smooth two-dimensional P-spline model for the force of mortality with Poisson error and log link to the other smooth two-dimensional P-spline models with Poisson and binomial errors defined in (a) and (b). Our comments are based on the results of fitting these models to data from six countries: Australia, France, Japan, Sweden, UK and USA. We also discuss the possibility o...
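A minimal sketch of case (a) with a log link (Python/statsmodels rather than the R specification the paper relies on; the Gompertz-type synthetic data and the single age covariate are illustrative assumptions): death counts are modeled as Poisson with the log of central exposure as an offset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
age = np.arange(40, 91)
exposure = np.full(age.shape, 1.0e5)
mu_true = np.exp(-9.5 + 0.095 * age)       # illustrative Gompertz-type force of mortality
deaths = rng.poisson(exposure * mu_true)

X = sm.add_constant(age.astype(float))     # log mu_x = a + b * age
fit = sm.GLM(deaths, X, family=sm.families.Poisson(),
             offset=np.log(exposure)).fit()
print(fit.params)                          # estimates of (a, b)

# The binomial variants in (b) would instead model deaths out of initial exposures
# with sm.families.Binomial() and a logit or complementary log-log link.
```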

101 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the optimal proportional reinsurance strategy in a risk model with two dependent classes of insurance business, where the two claim number processes are correlated through a common shock component.
Abstract: In this paper, we consider the optimal proportional reinsurance strategy in a risk model with two dependent classes of insurance business, where the two claim number processes are correlated through a common shock component. Under the criterion of maximizing the expected exponential utility with the variance premium principle, we adopt a nonstandard approach to examining the existence and uniqueness of the optimal reinsurance strategy. Using the technique of stochastic control theory, closed-form expressions for the optimal strategy and the value function are derived for the compound Poisson risk model as well as for the Brownian motion risk model. From the numerical examples, we see that the optimal results for the compound Poisson risk model are very different from those for the diffusion model. The former depends not only on the safety loading, time, and the interest rate, but also on the claim size distributions and the claim number processes, while the latter depends only on the safety loading, time,...

99 citations


Journal ArticleDOI
TL;DR: In the present work, solvers for both the generalized Poisson and the Poisson-Boltzmann problems have been developed; they exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions.
Abstract: The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, making it possible to solve the minimization problem iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
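A minimal sketch of the linear-algebra core only (Python/SciPy): a conjugate-gradient solve of the ordinary Poisson equation on a uniform grid with zero Dirichlet boundaries. It omits the spatially varying dielectric, the Poisson-Boltzmann term, the preconditioner, and the periodic/free/slab boundary handling of the paper's solver; grid size and source are illustrative.

```python
import numpy as np
from scipy.sparse import kron, identity, diags
from scipy.sparse.linalg import cg

n, h = 64, 1.0 / 65                                          # interior points and grid spacing
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = (kron(identity(n), T) + kron(T, identity(n))).tocsr()    # 5-point Laplacian (SPD)

x = np.arange(1, n + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / 0.01)      # localized source term

phi, info = cg(A, rho.ravel())                               # -laplacian(phi) = rho
print("CG converged:", info == 0, "| max potential:", phi.max())
```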

Journal ArticleDOI
TL;DR: In this paper, the consistency and asymptotic distribution of the Poisson quasi-maximum likelihood estimator of the conditional mean parameter of a count time series model are studied, both when the parameter belongs to the interior of the parameter space and when it lies at the boundary.
Abstract: Regularity conditions are given for the consistency of the Poisson quasi-maximum likelihood estimator of the conditional mean parameter of a count time series model. The asymptotic distribution of the estimator is studied when the parameter belongs to the interior of the parameter space and when it lies at the boundary. Tests for the significance of the parameters and for constant conditional mean are deduced. Applications to specific integer-valued autoregressive (INAR) and integer-valued generalized autoregressive conditional heteroscedasticity (INGARCH) models are considered. Numerical illustrations, Monte Carlo simulations and real data series are provided.
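A hedged sketch of the Poisson QMLE for one concrete case, an INGARCH(1,1) conditional mean lambda_t = omega + alpha*y_{t-1} + beta*lambda_{t-1} (Python; the simulated data, starting values, and optimizer settings are illustrative assumptions).

```python
import numpy as np
from scipy.optimize import minimize

def simulate_ingarch(n, omega, alpha, beta, rng):
    y = np.zeros(n, dtype=int)
    lam = omega / (1 - alpha - beta)            # stationary mean as starting intensity
    for t in range(n):
        y[t] = rng.poisson(lam)
        lam = omega + alpha * y[t] + beta * lam
    return y

def neg_poisson_quasi_loglik(params, y):
    omega, alpha, beta = params
    lam, nll = np.mean(y), 0.0                  # initialize the intensity recursion
    for t in range(len(y)):
        nll -= y[t] * np.log(lam) - lam         # Poisson quasi-log-likelihood term
        lam = min(omega + alpha * y[t] + beta * lam, 1e8)   # guard against explosion
    return nll

rng = np.random.default_rng(0)
y = simulate_ingarch(2000, omega=1.0, alpha=0.3, beta=0.4, rng=rng)
res = minimize(neg_poisson_quasi_loglik, x0=[0.5, 0.2, 0.2], args=(y,),
               bounds=[(1e-6, None), (0.0, 0.99), (0.0, 0.99)], method="L-BFGS-B")
print("QMLE of (omega, alpha, beta):", res.x)
```

The same quasi-likelihood remains a consistent estimating function for the conditional mean even when the counts are not conditionally Poisson, which is the point of the QMLE approach.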

Journal ArticleDOI
TL;DR: In this paper, the authors define a number of outlier detection algorithms related to the Huber-skip and least trimmed squares estimators, including the one-step Huberskip estimator and the forward search.
Abstract: Outlier detection algorithms are intimately connected with robust statistics that down-weight some observations to zero. We define a number of outlier detection algorithms related to the Huber-skip and least trimmed squares estimators, including the one-step Huber-skip estimator and the forward search. Next, we review a recently developed asymptotic theory of these. Finally, we analyse the gauge, the fraction of wrongly detected outliers, for a number of outlier detection algorithms and establish an asymptotic normal and a Poisson theory for the gauge.
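A hedged sketch of the gauge for a very simple one-step detection rule (Python; full-sample OLS stands in for a robust initial estimator, and the cut-off, sample size, and design are illustrative assumptions): on outlier-free normal data, the fraction of wrongly flagged observations should be close to 2(1 - Phi(c)).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
c, n, p, n_rep = 2.5, 200, 2, 500
gauges = []
for _ in range(n_rep):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)      # clean data, no outliers
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # initial (non-robust) fit
    resid = y - X @ beta
    sigma = np.sqrt(resid @ resid / (n - p))
    gauges.append(np.mean(np.abs(resid / sigma) > c))      # fraction flagged as outliers

print("empirical gauge:          ", np.mean(gauges))
print("asymptotic value 2(1-Phi):", 2 * (1 - norm.cdf(c)))
```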

Journal ArticleDOI
TL;DR: In this paper, a new class of inequalities, based on an iteration of the classical Poincare inequality, as well as on the use of Malliavin operators, of Stein's method, and of an integrated Mehler's formula, providing a representation of the Ornstein-Uhlenbeck semigroup in terms of thinned Poisson processes, is presented.
Abstract: We prove a new class of inequalities, yielding bounds for the normal approximation in the Wasserstein and the Kolmogorov distance of functionals of a general Poisson process (Poisson random measure). Our approach is based on an iteration of the classical Poincare inequality, as well as on the use of Malliavin operators, of Stein’s method, and of an (integrated) Mehler’s formula, providing a representation of the Ornstein-Uhlenbeck semigroup in terms of thinned Poisson processes. Our estimates only involve first and second order difference operators, and have consequently a clear geometric interpretation. In particular we will show that our results are perfectly tailored to deal with the normal approximation of geometric functionals displaying a weak form of stabilization, and with non-linear functionals of Poisson shot-noise processes. We discuss two examples of stabilizing functionals in great detail: (i) the edge length of the k-nearest neighbour graph, (ii) intrinsic volumes of k-faces of Voronoi tessellations. In all these examples we obtain rates of convergence (in the Kolmogorov and the Wasserstein distance) that one can reasonably conjecture to be optimal, thus significantly improving previous findings in the literature. As a necessary step in our analysis, we also derive new lower bounds for variances of Poisson functionals.


Journal ArticleDOI
TL;DR: The results highlight a few important distinctions of the Poisson case compared to the prior work including having to impose a minimum signal-to-noise requirement on each observed entry and a gap in the upper and lower bounds.
Abstract: We extend the theory of low-rank matrix recovery and completion to the case when Poisson observations for a linear combination or a subset of the entries of a matrix are available, which arises in various applications with count data. We consider the usual matrix recovery formulation through maximum likelihood with proper constraints on the matrix $M$ of size $d_{1}$-by-$d_{2}$, and establish theoretical upper and lower bounds on the recovery error. Our bounds for matrix completion are nearly optimal up to a factor on the order of ${\cal O}(\log(d_{1}d_{2}))$. These bounds are obtained by combining techniques for recovering sparse vectors with compressed measurements in Poisson noise, those for analyzing low-rank matrices, as well as those for one-bit matrix completion [Davenport et al., “1-Bit Matrix Completion,” Information and Inference, vol. 3, no. 3, pp. 189–223, Sep. 2014] (although these two problems are different in nature). The adaptation requires new techniques exploiting properties of the Poisson likelihood function and tackling the difficulties posed by the locally sub-Gaussian characteristic of the Poisson distribution. Our results highlight a few important distinctions of the Poisson case compared to the prior work including having to impose a minimum signal-to-noise requirement on each observed entry and a gap in the upper and lower bounds. We also develop a set of efficient iterative algorithms and demonstrate their good performance on synthetic examples and real data.
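A hedged illustration of the estimation problem (Python): gradient ascent on the Poisson log-likelihood of the observed entries over a low-rank factorization M = U V^T, used here as a simple surrogate for the constrained maximum-likelihood estimator analysed in the paper; dimensions, rank, sampling rate, step size, and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 80, 60, 3
M_true = rng.gamma(2.0, 1.0, (d1, r)) @ rng.gamma(2.0, 1.0, (r, d2))   # low-rank intensities
mask = rng.random((d1, d2)) < 0.3                                      # observed entries
Y = rng.poisson(M_true) * mask                                         # Poisson counts

U = rng.uniform(0.5, 1.5, (d1, r))
V = rng.uniform(0.5, 1.5, (d2, r))
lr, floor = 2e-4, 1e-3
for _ in range(5000):
    M = U @ V.T
    G = mask * (Y / np.maximum(M, floor) - 1.0)     # gradient of sum(y*log m - m) w.r.t. M
    U_new = np.clip(U + lr * (G @ V), floor, None)  # keep the factors positive
    V_new = np.clip(V + lr * (G.T @ U), floor, None)
    U, V = U_new, V_new

err = np.linalg.norm(U @ V.T - M_true) / np.linalg.norm(M_true)
print("relative recovery error:", round(err, 3))
```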

Journal ArticleDOI
TL;DR: Cryptosporidium human dose‐response data from seven species/isolates are used to investigate six models of varying complexity that estimate infection probability as a function of dose, suggesting additional inactivation and removal via treatment may be needed to meet any specified risk target.
Abstract: Cryptosporidium human dose-response data from seven species/isolates are used to investigate six models of varying complexity that estimate infection probability as a function of dose. Previous models attempt to explicitly account for virulence differences among C. parvum isolates, using three or six species/isolates. Four (two new) models assume species/isolate differences are insignificant and three of these (all but exponential) allow for variable human susceptibility. These three human-focused models (fractional Poisson, exponential with immunity and beta-Poisson) are relatively simple yet fit the data significantly better than the more complex isolate-focused models. Among these three, the one-parameter fractional Poisson model is the simplest but assumes that all Cryptosporidium oocysts used in the studies were capable of initiating infection. The exponential with immunity model does not require such an assumption and includes the fractional Poisson as a special case. The fractional Poisson model is an upper bound of the exponential with immunity model and applies when all oocysts are capable of initiating infection. The beta-Poisson model does not allow an immune human subpopulation; thus infection probability approaches 100% as dose becomes huge. All three of these models predict significantly (>10x) greater risk at the low doses that consumers might receive if exposed through drinking water or other environmental exposure (e.g., 72% vs. 4% infection probability for a one oocyst dose) than previously predicted. This new insight into Cryptosporidium risk suggests additional inactivation and removal via treatment may be needed to meet any specified risk target, such as a suggested 10^-4 annual risk of Cryptosporidium infection.
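A hedged sketch of the three human-focused dose-response forms in commonly cited parameterizations (Python; the functional form attributed to each model is a standard textbook version and the parameter values are purely illustrative, not the paper's fitted values).

```python
import numpy as np

def exponential(dose, r):
    # every ingested oocyst independently initiates infection with probability r
    return 1.0 - np.exp(-r * dose)

def beta_poisson_approx(dose, alpha, beta):
    # widely used approximation; infection probability tends to 1 at huge doses
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def fractional_poisson(dose, P):
    # one-parameter model: a fraction P of the population is susceptible and
    # every oocyst is assumed capable of initiating infection
    return P * (1.0 - np.exp(-dose))

doses = np.array([1.0, 10.0, 100.0, 1000.0])
print("exponential:        ", exponential(doses, r=0.005))
print("beta-Poisson approx:", beta_poisson_approx(doses, alpha=0.3, beta=50.0))
print("fractional Poisson: ", fractional_poisson(doses, P=0.7))
```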

Journal ArticleDOI
TL;DR: A theorem that allows the calculation of a class of functionals on Poisson point processes that have the form of expected values of sum-products of functions is proposed and proved, and a variant of the Campbell-Mecke theorem from stochastic geometry is presented.
Abstract: We propose and prove a theorem that allows the calculation of a class of functionals on Poisson point processes that have the form of expected values of sum–products of functions. In proving the theorem, we present a variant of the Campbell–Mecke theorem from stochastic geometry. We proceed to apply our result in the calculation of expected values involving interference in wireless Poisson networks. Based on this, we derive outage probabilities for transmissions in a Poisson network with Nakagami fading. Our results extend the stochastic geometry toolbox used for the mathematical analysis of interference-limited wireless networks.

Journal ArticleDOI
TL;DR: In this paper, a new fault segmentation model was proposed to forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years.
Abstract: We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault segmentation model. We also augment time-dependent Brownian passage time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the northern branch of the North Anatolian Fault Zone beneath the Marmara Sea. A total of 10 different Mw = 7.0 to Mw = 8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30 year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT + ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30 year probability, with a Poisson value of 29% and a time-dependent interaction probability of 48%. We find an aggregated 30 year Poisson probability of M > 7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a twofold probability gain (ratio time dependent to time independent) on the southern strands of the North Anatolian Fault Zone.
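A minimal sketch of the time-independent versus time-dependent probability calculation for a single fault source (Python/SciPy; the recurrence time, aperiodicity, elapsed time, and forecast window are illustrative, and the Coulomb stress interaction term is not included). The BPT renewal distribution with mean mu and aperiodicity a is an inverse Gaussian, so the 30-year conditional probability follows from its CDF.

```python
import math
from scipy.stats import invgauss

mu, a = 250.0, 0.5            # mean recurrence time (yr) and aperiodicity (assumed)
t_elapsed, dt = 200.0, 30.0   # years since the last event, forecast window

# Poisson (memoryless) probability of at least one event in the next dt years
p_poisson = 1.0 - math.exp(-dt / mu)

# BPT: inverse Gaussian with mean mu and coefficient of variation a,
# i.e. SciPy shape parameter a**2 and scale mu / a**2
bpt = invgauss(a**2, scale=mu / a**2)
p_bpt = (bpt.cdf(t_elapsed + dt) - bpt.cdf(t_elapsed)) / bpt.sf(t_elapsed)

print(f"Poisson 30-yr probability: {p_poisson:.2f}")
print(f"BPT 30-yr probability:     {p_bpt:.2f}")
```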

Journal ArticleDOI
TL;DR: In this paper, it is shown that for a large class of Poisson functionals, namely U-statistics of Poisson point processes, bounds for the normal approximation in the Kolmogorov distance hold at the same rate as in the Wasserstein distance.
Abstract: Peccati, Sole, Taqqu, and Utzet recently combined Stein’s method and Malliavin calculus to obtain a bound for the Wasserstein distance of a Poisson functional and a Gaussian random variable. Convergence in the Wasserstein distance always implies convergence in the Kolmogorov distance at a possibly weaker rate. But there are many examples of central limit theorems having the same rate for both distances. The aim of this paper was to show this behavior for a large class of Poisson functionals, namely so-called U-statistics of Poisson point processes. The technique used by Peccati et al. is modified to establish a similar bound for the Kolmogorov distance of a Poisson functional and a Gaussian random variable. This bound is evaluated for a U-statistic, and it is shown that the resulting expression is up to a constant the same as it is for the Wasserstein distance.

Journal ArticleDOI
TL;DR: In this article, the application of the Poisson inverse Gaussian (PIG) regression model for modeling motor vehicle crash data has been evaluated and compared with the negative binomial (NB) model, especially when a varying dispersion parameter is introduced.
Abstract: This article documents the application of the Poisson inverse Gaussian (PIG) regression model for modeling motor vehicle crash data. The PIG distribution, which mixes the Poisson distribution and the inverse Gaussian distribution, has the potential for modeling highly dispersed count data due to the flexibility of the inverse Gaussian distribution. The objectives of this article were to evaluate the application of the PIG regression model for analyzing motor vehicle crash data and to compare the results with the negative binomial (NB) model, especially when a varying dispersion parameter is introduced. To accomplish these objectives, NB and PIG models were developed with fixed and varying dispersion parameters and compared using two data sets. The results of this study show that PIG models perform better than the NB models in terms of goodness-of-fit statistics. Moreover, the PIG model can perform as well as the NB model in capturing the variance of crash data. Lastly, PIG models demonstrate almost the same prediction ...
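A hedged sketch of the mixing that defines the PIG distribution (Python; the mean-1, variance-tau inverse Gaussian frailty is one common parameterization and the numerical values are assumptions): the pmf is obtained by integrating the Poisson pmf against an inverse Gaussian density, and a negative binomial with matched mean and variance is shown for comparison.

```python
import numpy as np
from scipy import integrate
from scipy.stats import poisson, invgauss, nbinom

def pig_pmf(y, mu, tau):
    # Y | v ~ Poisson(mu * v), with frailty v ~ inverse Gaussian, mean 1, variance tau
    mix = invgauss(tau, scale=1.0 / tau)          # mean tau*(1/tau) = 1, variance tau
    integrand = lambda v: poisson.pmf(y, mu * v) * mix.pdf(v)
    return integrate.quad(integrand, 0.0, np.inf)[0]

mu, tau = 2.0, 1.5
pig = np.array([pig_pmf(y, mu, tau) for y in range(15)])

# negative binomial with the same mean (mu) and variance (mu + tau*mu^2)
var = mu + tau * mu**2
n_par, p_par = mu**2 / (var - mu), mu / var
nb = nbinom.pmf(np.arange(15), n_par, p_par)

print("PIG pmf:", np.round(pig, 4))
print("NB pmf: ", np.round(nb, 4))
```

Comparing the two pmfs with matched first two moments illustrates how the PIG reallocates probability mass relative to the NB for the same level of dispersion.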

Journal ArticleDOI
TL;DR: In this article, a Poisson multi-Bernoulli mixture (PMBM) conjugate prior for multiple extended object filtering is presented. But, the unknown data associations lead to an intractably large number of terms in the PMBM density, and approximations are necessary for tractability.
Abstract: This paper presents a Poisson multi-Bernoulli mixture (PMBM) conjugate prior for multiple extended object filtering. A Poisson point process is used to describe the existence of yet undetected targets, while a multi-Bernoulli mixture describes the distribution of the targets that have been detected. The prediction and update equations are presented for the standard transition density and measurement likelihood. Both the prediction and the update preserve the PMBM form of the density, and in this sense the PMBM density is a conjugate prior. However, the unknown data associations lead to an intractably large number of terms in the PMBM density, and approximations are necessary for tractability. A gamma Gaussian inverse Wishart implementation is presented, along with methods to handle the data association problem. A simulation study shows that the extended target PMBM filter performs well in comparison to the extended target d-GLMB and LMB filters. An experiment with Lidar data illustrates the benefit of tracking both detected and undetected targets.

Journal ArticleDOI
TL;DR: In this paper, the authors performed a round robin test to evaluate the capacity to measure Poisson's ratio of an asphalt mixture in the laboratory and to check whether it could be considered as an isotropic property.
Abstract: As part of RILEM TC 237-SIB, TG3 performed a Round Robin Test to evaluate the capacity to measure Poisson’s ratio of an asphalt mixture in the laboratory and to check whether it could be considered as an isotropic property. Five laboratories located in five different countries took part in the testing program. This paper presents the different techniques used by the laboratories, reports the measured Poisson’s ratios and comments upon the differences found between the results. Sinusoidal or haversine loading either in tension–compression or pure compression was applied to the specimens over a range of frequencies and temperatures. During the loading both the axial and radial strains were monitored to allow the complex Young’s modulus and the complex Poisson’s ratios to be calculated. It was found that the complex Young’s modulus and the complex Poisson’s ratios were very close in the Black Diagrams, but diverge sharply in the Cole–Cole plots. It was observed that the maximum difference between the complex Poisson’s ratio in direction 2 and direction 3 is less than 0.05. It would appear that this difference is more related to measurement deviation than anisotropy of the material. Some differences were observed in the master curves of complex Young’s modulus and complex Poisson’s ratio obtained from the five laboratories; however these differences could in most cases be explained by temperature differences. It was concluded that within the linear viscoelastic range (small strains) the results from the different laboratories show similar rheological behavior and the material response follows the same trend.

Journal ArticleDOI
TL;DR: A new control chart is developed based on the weighted likelihood ratio test, and it can be readily extended to other generalized profiles or profiles with random predictors if the likelihood function can be obtained.

Journal ArticleDOI
TL;DR: In this article, a unit cell structure with a re-entrant hollow skeleton was designed and its Poisson's ratio was studied using the finite element method (FEM) as a function of the geometric variables and the parent material's Poisson's ratio.

Journal ArticleDOI
TL;DR: A simulation study was performed to evaluate 6 regression methods fitted under a generalized estimating equation framework: binomial identity, Poisson identity, Normal identity, log binomial, log Poisson, and logistic regression model for reporting of absolute risk difference.
Abstract: Reporting of absolute risk difference (RD) is recommended for clinical and epidemiological prospective studies. In analyses of multicenter studies, adjustment for center is necessary when randomization is stratified by center or when there is large variation in patients' outcomes across centers. While regression methods are used to estimate RD adjusted for baseline predictors and clustering, no formal evaluation of their performance has been previously conducted. We performed a simulation study to evaluate 6 regression methods fitted under a generalized estimating equation framework: binomial identity, Poisson identity, Normal identity, log binomial, log Poisson, and logistic regression model. We compared the model estimates to unadjusted estimates. We varied the true response function (identity or log), number of subjects per center, true risk difference, control outcome rate, effect of baseline predictor, and intracenter correlation. We compared the models in terms of convergence, absolute bias and coverage of 95 % confidence intervals for RD. The 6 models performed very similarly to each other for the majority of scenarios. However, the log binomial model did not converge for a large portion of the scenarios including a baseline predictor. In scenarios with outcome rate close to the parameter boundary, the binomial and Poisson identity models had the best performance, but differences from other models were negligible. The unadjusted method introduced little bias to the RD estimates, but its coverage was larger than the nominal value in some scenarios with an identity response. Under the log response, coverage from the unadjusted method was well below the nominal value (<80 %) for some scenarios. We recommend the use of a binomial or Poisson GEE model with identity link to estimate RD for correlated binary outcome data. If these models fail to run, then either a logistic regression, log Poisson regression, or linear regression GEE model can be used.
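A minimal sketch of one of the recommended approaches, a Poisson GEE with identity link and exchangeable within-center correlation (Python/statsmodels; the simulated multicenter data and all numeric values are illustrative assumptions, and the link class name follows recent statsmodels releases).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_centers, n_per = 20, 50
center = np.repeat(np.arange(n_centers), n_per)
treat = rng.integers(0, 2, size=center.size)
x = rng.normal(size=center.size)
center_eff = rng.normal(0.0, 0.03, n_centers)[center]          # intracenter correlation
p = np.clip(0.20 + 0.10 * treat + 0.02 * x + center_eff, 0.01, 0.99)
y = rng.binomial(1, p)                                          # binary outcome, true RD = 0.10

df = pd.DataFrame({"y": y, "treat": treat, "x": x, "center": center})
model = sm.GEE.from_formula(
    "y ~ treat + x", groups="center", data=df,
    family=sm.families.Poisson(link=sm.families.links.Identity()),
    cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print("adjusted risk difference (treat):", round(res.params["treat"], 3))
```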

Journal ArticleDOI
TL;DR: In this article, a two-dimensional quadrilateral cellular structure made from bi-material strips was designed and its thermal deformation behaviors were studied via experimental, analytical and numerical approaches.

Journal ArticleDOI
TL;DR: The objective of this study is to investigate and compare various alternate highway-rail grade crossing accident frequency models that can handle the under-dispersion issue and to obtain insights regarding vehicle crashes at public highway-rail grade crossings.

Journal ArticleDOI
TL;DR: In this article, a functional limit theorem is derived that gives an upper bound for an optimal transportation distance between the point process induced by a symmetric function of a Poisson or binomial process and a Poisson process on the target space.
Abstract: A Poisson or a binomial process on an abstract state space and a symmetric function f acting on k-tuples of its points are considered. They induce a point process on the target space of f. The main result is a functional limit theorem which provides an upper bound for an optimal transportation distance between the image process and a Poisson process on the target space. The technical background are a version of Stein’s method for Poisson process approximation, a Glauber dynamics representation for the Poisson process and the Malliavin formalism. As applications of the main result, error bounds for approximations of U-statistics by Poisson, compound Poisson and stable random variables are derived, and examples from stochastic geometry are investigated.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of testing for parameter change in zero-inflated generalized Poisson (ZIGP) autoregressive models and verify that the ZIGP process is stationary and ergodic and that the conditional maximum likelihood estimator is strongly consistent and asymptotically normal.
Abstract: In this paper, we consider the problem of testing for parameter change in zero-inflated generalized Poisson (ZIGP) autoregressive models. We verify that the ZIGP process is stationary and ergodic and that the conditional maximum likelihood estimator (CMLE) is strongly consistent and asymptotically normal. Based on these results, we construct CMLE- and residual-based cumulative sum tests and show that their limiting null distributions are a function of independent Brownian bridges. The simulation results are provided for illustration. A real data analysis is performed on some crime data of Australia.