
Showing papers in "Statistics, Optimization and Information Computing in 2018"


Journal ArticleDOI
TL;DR: In this paper, the problem of fitting real data collected by the Florida HRSV surveillance system by using a periodic SEIRS mathematical model was addressed, and a sensitivity and cost-effectiveness analysis of the model was done and an optimal control problem was formulated and solved with treatment as the control variable.
Abstract: A state wide Human Respiratory Syncytial Virus (HRSV) surveillance system was implemented in Florida in 1999 to support clinical decision-making for prophylaxis of premature infants. The research presented in this paper addresses the problem of fitting real data collected by the Florida HRSV surveillance system by using a periodic SEIRS mathematical model. A sensitivity and cost-effectiveness analysis of the model is done and an optimal control problem is formulated and solved with treatment as the control variable.
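The abstract does not reproduce the model equations; the following is a minimal sketch of a seasonally forced (periodic) SEIRS system integrated with SciPy. The form of the transmission rate and all parameter values are illustrative assumptions, not the values fitted to the Florida HRSV data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative periodic SEIRS model: beta(t) varies with a one-year period.
# Parameter values are placeholders, not the fitted Florida HRSV values.
mu, eps, gamma, delta = 0.013, 36.0, 36.0, 1.8   # per-year rates (hypothetical)
b0, b1 = 80.0, 0.2                               # mean transmission rate and seasonal amplitude

def beta(t):
    return b0 * (1.0 + b1 * np.cos(2.0 * np.pi * t))

def seirs(t, y):
    S, E, I, R = y
    N = S + E + I + R
    dS = mu * N - mu * S - beta(t) * S * I / N + delta * R
    dE = beta(t) * S * I / N - (mu + eps) * E
    dI = eps * E - (mu + gamma) * I
    dR = gamma * I - (mu + delta) * R
    return [dS, dE, dI, dR]

y0 = [0.98, 0.01, 0.01, 0.0]                     # initial fractions of the population
sol = solve_ivp(seirs, (0.0, 10.0), y0, t_eval=np.linspace(0.0, 10.0, 1000))
print(sol.y[2].max())                            # peak infectious fraction over ten years
```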

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors study variational problems with functionals containing the Caputo-Fabrizio fractional derivative, a fractional derivative with a non-singular kernel.
Abstract: This paper is devoted to the study of some variational problems with functionals containing the Caputo-Fabrizio fractional derivative, that is, a fractional derivative with a non-singular kernel.

24 citations


Journal ArticleDOI
TL;DR: In this work, the weighted versions of Bayesian predictor, perceptron, multilayer perceptrons, SVM, and decision tree are developed and it is shown how their results would be different from their non-weighted versions.
Abstract: Sometimes not all training samples are equal in supervised machine learning. This might happen in different applications because some training samples are measured by more accurate devices, training samples come from different sources with different reliabilities, there is more confidence in some training samples than in others, some training samples are more relevant than others, or for any other reason the user wants to put more emphasis on some training samples. Non-weighted machine learning techniques are designed for equally important training samples: (a) the cost of misclassification is equal for all training samples in parametric classification techniques, (b) residuals are equally important in parametric regression models, and (c) when voting in non-parametric classification and regression models, training samples either have equal weights or their weights are determined internally by kernels in the feature space, so no external weights are used. The weighted least squares model is an example of a weighted machine learning technique which takes the training samples’ weights into account. In this work, we develop the weighted versions of the Bayesian predictor, perceptron, multilayer perceptron, SVM, and decision tree and show how their results would be different from their non-weighted versions.
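As a concrete illustration of the idea (not the authors' own derivations), per-sample weighting is already exposed by some libraries; a minimal sketch with scikit-learn, where the weight vector `w` is a hypothetical emphasis chosen by the user:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hypothetical per-sample weights: emphasize the first 50 samples
# (e.g. measured by a more accurate device) three times as much.
w = np.ones(len(y))
w[:50] = 3.0

svm = SVC(kernel="linear").fit(X, y, sample_weight=w)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y, sample_weight=w)

# Weighted fits generally differ from the unweighted ones.
print(svm.coef_, tree.predict(X[:5]))
```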

23 citations


Journal ArticleDOI
TL;DR: In this article, the authors study population growth with a fractional differential equation, where the order of the fractional derivative is a function depending on time, and the goal is to determine the fractional-order function that best fits the given data.
Abstract: The objective is to study population growth with a fractional differential equation. The order of the fractional derivative is a function depending on time, and the goal is to determine the fractional-order function that best fits the given data. The model is then tested on the growth of the world population and of some countries. All the numerical experiments were done in MATLAB, using the routines lsqcurvefit, fminunc and spline.
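The paper fits a variable-order fractional model in MATLAB (lsqcurvefit, fminunc, spline); the snippet below is only a classical-order stand-in showing the same least-squares fitting workflow in Python, with invented data and a logistic growth curve instead of the fractional model:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, P0):
    """Classical logistic growth; a stand-in for the variable-order fractional model."""
    return K / (1.0 + (K - P0) / P0 * np.exp(-r * t))

# Hypothetical population data (time in decades, population in billions).
t_data = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
p_data = np.array([2.5, 3.0, 3.7, 4.4, 5.3, 6.1, 6.9])

popt, _ = curve_fit(logistic, t_data, p_data, p0=[12.0, 0.3, 2.5])
print("fitted K, r, P0:", popt)
```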

21 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the comb product of two connected graphs and determine its fractional metric dimension, where pairs of vertices are resolved by a real-valued resolving function.
Abstract: A vertex $z$ in a connected graph $G$ \textit{resolves} two vertices $u$ and $v$ in $G$ if $d_G(u,z)\neq d_G(v,z)$. A set of vertices $R_G\{u,v\}$ is the set of all resolving vertices of $u$ and $v$ in $G$. For every two distinct vertices $u$ and $v$ in $G$, a \textit{resolving function} $f$ of $G$ is a real function $f:V(G)\rightarrow[0,1]$ such that $f(R_G\{u,v\})\geq1$. The minimum value of $f(V(G))$ over all resolving functions $f$ of $G$ is called the \textit{fractional metric dimension} of $G$. In this paper, we consider the graph obtained by the comb product of two connected graphs $G$ and $H$, denoted by $G\rhd_o H$. For any connected graph $G$, we determine the fractional metric dimension of $G\rhd_o H$, where $H$ is a connected graph having a stem or a major vertex.
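For a small graph, the fractional metric dimension defined above can be computed directly by linear programming: minimize $\sum_v f(v)$ subject to $f(R_G\{u,v\})\geq 1$ for every pair. The sketch below is not taken from the paper and uses an arbitrary small graph rather than a comb product:

```python
import itertools
import numpy as np
import networkx as nx
from scipy.optimize import linprog

G = nx.petersen_graph()                 # any small connected graph will do
nodes = list(G.nodes)
dist = dict(nx.all_pairs_shortest_path_length(G))

# One constraint per vertex pair {u, v}: sum of f(z) over resolving vertices >= 1.
rows, rhs = [], []
for u, v in itertools.combinations(nodes, 2):
    row = [-1.0 if dist[z][u] != dist[z][v] else 0.0 for z in nodes]
    rows.append(row)                    # negated for linprog's A_ub @ x <= b_ub form
    rhs.append(-1.0)

res = linprog(c=np.ones(len(nodes)),    # minimize the sum of f(v)
              A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(0.0, 1.0)] * len(nodes), method="highs")
print("fractional metric dimension of this graph:", res.fun)
```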

19 citations


Journal ArticleDOI
TL;DR: In this paper, the optimal control problem in feedback form (synthesis) is considered for a parabolic equation with rapidly oscillating coefficients and a non-decomposable quadratic cost functional with a superposition-type operator.
Abstract: In this paper, we consider the optimal control problem in feedback form (synthesis) for a parabolic equation with rapidly oscillating coefficients and a non-decomposable quadratic cost functional with a superposition-type operator. In general, it is not possible to find an exact formula for the optimal synthesis of such a problem because the Fourier method cannot be applied directly. However, the transition to the homogenized parameters greatly simplifies the structure of the problem. Assuming that the problem with the homogenized coefficients already admits an optimal synthesis form, we justify an approximate optimal control in feedback form for the initial problem. We give an example of a superposition operator for specific conditions.

13 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to employ the non-radial measure (slack-based model) in DEA window analysis to measure the technical efficiency change over the years of various Indian cement companies.
Abstract: In data envelopment analysis (DEA), the concept of efficiency is examined either with a radial measure or with a non-radial measure. A radial measure gives the proportion or percentage by which all the inputs (outputs) are to be reduced (increased) simultaneously, whereas a non-radial measure deals directly with the input and output slacks, which must be minimized; hence inputs and outputs need not be optimized by the same proportion. Radial measures often provide only a weak efficiency score, whereas this is not the case for non-radial models. The aim of this paper is to employ the non-radial measure (slack-based model) in DEA window analysis to measure the technical efficiency change over the years of various Indian cement companies. The research is based on unbalanced panel data of Indian cement companies during the period 2005-15. DEA window analysis is used to determine the efficiency of cement companies and to observe the possibility of changes in the technical efficiency over time. A study is conducted to evaluate the efficiency of cement companies in India in order to identify the sources of inefficiencies and formulate proposals for improving the productivity of those companies and their operations through a three-year window analysis.
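As a hedged sketch of the slack-based measure (SBM) referred to here, the non-oriented SBM efficiency of a single DMU can be computed as a linear program after the usual Charnes-Cooper transformation. The data below are invented, and the paper's window construction (three-year windows over the 2005-15 panel) is not shown:

```python
import numpy as np
from scipy.optimize import linprog

def sbm_efficiency(X, Y, o):
    """Non-oriented slack-based efficiency of DMU o.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Returns rho in (0, 1]."""
    m, n = X.shape
    s = Y.shape[0]
    xo, yo = X[:, o], Y[:, o]
    # decision vector: [t, lambda (n), s_minus (m), s_plus (s)]
    c = np.concatenate(([1.0], np.zeros(n), -1.0 / (m * xo), np.zeros(s)))
    A_eq, b_eq = [], []
    # normalization: t + (1/s) * sum(s_plus / yo) = 1
    A_eq.append(np.concatenate(([1.0], np.zeros(n), np.zeros(m), 1.0 / (s * yo))))
    b_eq.append(1.0)
    # input balance: X @ lambda + s_minus - t * xo = 0
    for i in range(m):
        e = np.zeros(m); e[i] = 1.0
        A_eq.append(np.concatenate(([-xo[i]], X[i, :], e, np.zeros(s))))
        b_eq.append(0.0)
    # output balance: Y @ lambda - s_plus - t * yo = 0
    for r in range(s):
        e = np.zeros(s); e[r] = -1.0
        A_eq.append(np.concatenate(([-yo[r]], Y[r, :], np.zeros(m), e)))
        b_eq.append(0.0)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * len(c), method="highs")
    return res.fun

# Invented data: 2 inputs, 1 output, 4 DMUs (e.g. cement plants in one window).
X = np.array([[4.0, 6.0, 8.0, 5.0],
              [3.0, 2.0, 5.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.5, 1.2]])
print([round(sbm_efficiency(X, Y, o), 3) for o in range(4)])
```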

11 citations


Journal ArticleDOI
TL;DR: This paper presents Nonparametric Predictive Inference for the best linear combination of two biomarkers, where the dependence of the two biomarkers is modelled using parametric copulas, and comments on the results of a simulation study.
Abstract: Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve is a useful tool to assess the ability of a diagnostic test to discriminate between two classes or groups. In practice, multiple diagnostic tests or biomarkers may be combined to improve diagnostic accuracy, e.g. by maximizing the area under the ROC curve. In this paper we present Nonparametric Predictive Inference (NPI) for the best linear combination of two biomarkers, where the dependence of the two biomarkers is modelled using parametric copulas. NPI is a frequentist statistical method that is explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The use of NPI for the individual biomarkers, combined with a basic parametric copula to take dependence into account, has good robustness properties and leads to quite straightforward computation. We briefly comment on the results of a simulation study to investigate the performance of the proposed method in comparison to the empirical method. An example with data from the literature is provided to illustrate the proposed method, and related research problems are briefly discussed.
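The NPI-copula machinery itself is beyond a short snippet, but the empirical comparator mentioned in the abstract, choosing the linear combination of two biomarkers that maximizes the empirical AUC, can be sketched as follows (the data and the direction grid are invented):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Invented biomarker values for 100 healthy (label 0) and 100 diseased (label 1) subjects.
x1 = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
x2 = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(0.7, 1.0, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Search over directions (cos a, sin a); the best one maximizes the empirical AUC.
best = max(((roc_auc_score(y, np.cos(a) * x1 + np.sin(a) * x2), a)
            for a in np.linspace(0.0, 2.0 * np.pi, 721)), key=lambda p: p[0])
print("empirical AUC of best linear combination:", round(best[0], 3))
```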

10 citations


Journal ArticleDOI
TL;DR: In this article, numerical solutions of the modified Korteweg-de Vries (MKdV) equation are obtained by a numerical technique based on the subdomain finite element method using quartic B-splines.
Abstract: In this article, we have obtained numerical solutions of the modified Korteweg-de Vries (MKdV) equation by a numerical technique based on the subdomain finite element method using quartic B-splines. The proposed numerical algorithm is tested by applying it to three test problems: a single solitary wave and the interaction of two and three solitary waves. To inspect the performance of the newly applied method, the error norms, L2 and L1, as well as the four lowest invariants, I1, I2, I3 and I4, have been computed. A linear stability analysis of the algorithm is also carried out.

9 citations


Journal ArticleDOI
TL;DR: This work proposes a simple mathematical model for unemployment that is more realistic and useful than recent models available in the literature and provides some non-trivial and interesting conclusions.
Abstract: We propose a simple mathematical model for unemployment. Despite its simplicity, we claim that the model is more realistic and useful than recent models available in the literature. A case study with real data from Portugal supports our claim. An optimal control problem is formulated and solved, which provides some non-trivial and interesting conclusions.

9 citations


Journal ArticleDOI
TL;DR: This work is based on the research supported in part by the National Research Foundation of South Africa (SARChI Research Chair- UID: 71199; and Grant ref. CPRR160403161466 nr. 105840).
Abstract: The authors acknowledge the support of the StatDisT group. This work is based on the research supported in part by the National Research Foundation of South Africa (SARChI Research Chair- UID: 71199; and Grant ref. CPRR160403161466 nr. 105840).

Journal ArticleDOI
TL;DR: Experimental results show that the proposed approach becomes more efficient with increasing size of the analysed dataset and increases the reliability of data partitioning into groups.
Abstract: In this paper, a new method for anomaly detection based on weighted clustering is proposed. Weights, obtained by summing the weights of the individual points from the data set, are assigned to the clusters. The comparison with the k-means algorithm is made using seven datasets of large dimensions. The proposed approach increases the reliability of data partitioning into groups. Experimental results show that the proposed approach becomes more efficient as the size of the analysed dataset increases.
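For orientation only, the non-weighted k-means baseline that the paper compares against can be turned into a simple anomaly detector by scoring each point by its distance to the nearest centroid; this is not the authors' weighted-clustering scheme, and the data are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (300, 2)),      # dense cluster around (0, 0)
               rng.normal(5.0, 1.0, (300, 2)),      # dense cluster around (5, 5)
               rng.uniform(-10.0, 15.0, (10, 2))])  # a few scattered synthetic outliers

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist_to_centroid = km.transform(X).min(axis=1)      # distance to the nearest centroid

# Flag the 2% of points farthest from their centroid as anomalies.
threshold = np.quantile(dist_to_centroid, 0.98)
anomalies = np.where(dist_to_centroid > threshold)[0]
print(len(anomalies), "points flagged as anomalies")
```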

Journal ArticleDOI
TL;DR: The results show not only that the cost of implementing control policies is a crucial parameter for the spreading of marketing messages, but also that low investment costs in control strategies fulfill the proposed trade-off without compromising the financial capacity of a company.
Abstract: The complexity of optimal control problems requires the use of numerical methods to compute control and optimal state trajectories for a dynamical system, aiming to optimize a particular performance index. Considering a real viral advertisement, this article compares the dynamics of a viral marketing epidemic model with optimal control under different cost scenarios and from two perspectives: using numerical methods based on Pontryagin's Maximum Principle (indirect methods) and methods that treat the optimal control problem as a nonlinear constrained optimization problem (direct methods). Based on the trade-off between the maximization of information spreading and the minimization of the costs associated with it, an optimal control problem is formulated and studied. The existence and uniqueness of the solution are proved. Our results show not only that the cost of implementing control policies is a crucial parameter for the spreading of marketing messages, but also that low investment costs in control strategies fulfill the proposed trade-off without compromising the financial capacity of a company.

Journal ArticleDOI
TL;DR: In this article, a new variational calculus based on the general quantum difference operator is developed, and optimality conditions are obtained for generalized variational problems where the Lagrangian may depend on the endpoint conditions and a real parameter.
Abstract: We develop a new variational calculus based on the general quantum difference operator recently introduced by Hamza et al. In particular, we obtain optimality conditions for generalized variational problems where the Lagrangian may depend on the endpoint conditions and a real parameter, for the basic and isoperimetric problems, with and without fixed boundary conditions. Our results provide a generalization of previous results obtained for the $q$- and Hahn-calculus.

Journal ArticleDOI
TL;DR: In this article, conditions for the weak convergence in the space $C(T)$ of stochastic processes from the space $\mathbf{F}_\psi(\Omega)$ are investigated, and the corresponding limit theorem is obtained.
Abstract: This paper is devoted to the investigation of conditions for the weak convergence in the space $C(T)$ of the stochastic processes from the space $\mathbf{F}_\psi(\Omega)$. Using these conditions, the limit theorem for stochastic processes from the space $\mathbf{F}_\psi(\Omega)$ has been obtained. This theorem can be utilized to achieve a given approximation accuracy and reliability for integrals depending on a parameter evaluated by the Monte Carlo method.

Journal ArticleDOI
TL;DR: A new iterative method based on the quasi-Newton approach for solving systems of nonlinear equations, especially large scale, is proposed, using the weighted combination of the Trapezoidal and Simpson quadrature rules.
Abstract: A new iterative method based on the quasi-Newton approach for solving systems of nonlinear equations, especially large-scale ones, is proposed. We use a weighted combination of the Trapezoidal and Simpson quadrature rules. Our goal is to enhance the efficiency of the well-known Broyden method by reducing the number of iterations it takes to reach a solution. Local convergence analysis and computational results are given.
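The new quadrature-weighted update is not reproduced here, but the classical Broyden method it aims to accelerate is short enough to state; a minimal sketch on a toy 2x2 nonlinear system:

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    """Classical ('good') Broyden method: B is a secant approximation to the Jacobian."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                     # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -Fx)       # quasi-Newton step
        x_new = x + dx
        Fx_new = F(x_new)
        if np.linalg.norm(Fx_new) < tol:
            return x_new
        dF = Fx_new - Fx
        B += np.outer(dF - B @ dx, dx) / (dx @ dx)   # rank-one secant update
        x, Fx = x_new, Fx_new
    return x

# Toy system: x^2 + y^2 = 4, x*y = 1.
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
print(broyden(F, [2.0, 0.5]))
```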

Journal ArticleDOI
TL;DR: In this paper, the covariant reduced projectable forward difference operator is used for a covariant discretization of the main elements of a variational theory: the jet bundle, the Lagrangian density and the associated action functional.
Abstract: Retraction maps on Lie groups can be successfully used in mechanics and control theory to generate numerical integration schemes for ordinary differential equations with a variational origin, recovering at the same time a discrete version of the energy and symplectic structure conservation properties that are characteristic of smooth variational mechanics. The present work fixes the specific tool that plays in gauge field theories the same role as retraction maps in geometric mechanics. This tool, the covariant reduced projectable forward difference operator, can be used for a covariant discretization of the main elements of a variational theory: the jet bundle, the Lagrangian density and the associated action functional. Particular interest is dedicated to the trivialized formulation of a gauge field theory, and its reduction into a theory where fields are given as principal connections and $H$-structures. The main characteristics of the presented method are its covariance under gauge transformations and the commutation of the discretization and the reduction processes.

Journal ArticleDOI
TL;DR: A novel swift algorithm for contrast enhancement in low-contrast images is introduced, which provided better results than those produced by several contemporary techniques in terms of recorded accuracy and perceived quality.
Abstract: Contrast enhancement plays a significant role in many existing image-related applications. In various situations, conventional contrast enhancement techniques fail to produce acceptable results for a wide variety of low-contrast images. As a result, various innovative techniques have been proposed for the purpose of contrast enhancement. Despite that, this field is still open for research due to its indispensability in many scientific disciplines and to various unavoidable real-world limitations. Hence, this article introduces a novel swift algorithm for contrast enhancement in low-contrast images. The processing concept of this algorithm is straightforward. Initially, a non-complex logarithmic function is applied as a preprocessing step to attenuate the immoderate pixel values. Then, a new non-linear enhancement function, which is designed experimentally based on mathematical, statistical and spatial information, is applied to modify the brightness and contrast. Finally, a regularization function is applied as a post-processing step to rearrange the image pixels into their natural dynamic range. Experimental results revealed the favorability of the proposed algorithm, as it provided better results than those produced by several contemporary techniques in terms of recorded accuracy and perceived quality.

Journal ArticleDOI
TL;DR: In this article, a numerical integration algorithm for discrete Euler-Poincare equations arising from variational principles with differential constraints is presented, extending existing integration algorithms both to field theories and to variational principles with constraints.
Abstract: In the reduction of field theories in principal $G$-bundles, when a subgroup $H\subset G$ acts by symmetries of the Lagrangian, each of the $H$-reduced unknown fields decomposes as a flat principal connection and a parallel $H$-structure. A suitable variational principle with differential constraints on such fields leads to necessary criticality conditions known as Euler-Poincare equations. We model constrained discrete variational theories on a simplicial complex and generate from the smooth theory, in a covariant way, a discrete variational formulation of $H$-reduced field theories. Critical fields in this formulation are characterized by a corresponding discrete version of the Euler-Poincare equations. We present a numerical integration algorithm for the discrete Euler-Poincare equations, which extends integration algorithms for Euler-Poincare equations in mechanics to the case of field theories and also extends integration algorithms for Euler-Lagrange equations in discrete field theories to the case of variational principles with constraints. For regular reduced discrete Lagrangians, this algorithm allows initial condition data, given on an initial condition band, to be propagated univocally into a solution of the corresponding equations for the discrete variational principle.

Journal ArticleDOI
TL;DR: A sure independence screening procedure based on the distance correlation between predictors and marginal distribution function of the response variable is developed and a double penalization based procedure is applied to identify nonzero and linear components, simultaneously.
Abstract: In this paper, we introduce a two-step procedure, in the context of ultrahigh-dimensional additive models, to identify nonzero and linear components. We first develop a sure independence screening procedure based on the distance correlation between predictors and marginal distribution function of the response variable to reduce the dimensionality of the feature space to a moderate scale. Then a double penalization based procedure is applied to identify nonzero and linear components, simultaneously. We conduct extensive simulation experiments to evaluate the numerical performance of the proposed method and analyze a cardiomyopathy microarray data for an illustration. Numerical studies confirm the fine performance of the proposed method for various semiparametric models.
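A minimal sketch of the screening step only (the double-penalization stage is omitted): the distance correlation is computed between each predictor and the empirical distribution function of the response, and the top-ranked predictors are retained. The data and the cut-off d are invented:

```python
import numpy as np
from scipy.stats import rankdata

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D arrays (Szekely-Rizzo style)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double-centered distance matrices
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)) if dvar_x * dvar_y > 0 else 0.0

rng = np.random.default_rng(3)
n, p, d = 100, 1000, 20                       # ultrahigh-dimensional toy setting
X = rng.normal(size=(n, p))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.5, size=n)

Fy = rankdata(y) / n                          # empirical distribution function of the response
scores = np.array([distance_correlation(X[:, j], Fy) for j in range(p)])
keep = np.argsort(scores)[::-1][:d]           # retain the d top-ranked predictors
print("top five screened predictors:", sorted(keep[:5]))
```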

Journal ArticleDOI
TL;DR: A novel wrapper method based on a hybridized support vector machine and recursive feature elimination with information-theoretic measure of complexity (ICOMP) is introduced and developed to classify high-dimensional data sets and to carry out subset selection of the features in the original data space for finding the best subset of features which are discriminating between the groups.
Abstract: In statistical data mining research, datasets often exhibit nonlinearity and, at the same time, high dimensionality. It has become difficult to analyze such datasets in a comprehensive manner using traditional statistical methodologies. In this paper, a novel wrapper method called SVM-ICOMP-RFE, based on a hybridized support vector machine (SVM) and recursive feature elimination (RFE) with an information-theoretic measure of complexity (ICOMP), is introduced and developed to classify high-dimensional data sets and to carry out subset selection of the features in the original data space, in order to find the best subset of features discriminating between the groups. Recursive feature elimination (RFE) ranks features based on the information complexity (ICOMP) criterion. ICOMP plays an important role not only in choosing an optimal kernel function from a portfolio of many other kernel functions, but also in selecting important subset(s) of features. The potential and flexibility of our approach are illustrated on two real benchmark data sets: ionosphere data, which includes radar returns from the ionosphere, and aorta data, which is used for the early detection of atheroma, most commonly resulting in heart attack. Also, the proposed method is compared with other RFE-based methods using different measures (i.e., weight and gradient) for feature rankings.
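The ICOMP-based ranking is the paper's contribution and is not reproduced here; for context, standard SVM-RFE (ranking features by the magnitudes of the linear SVM weights) is available directly in scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic high-dimensional data standing in for the ionosphere / aorta examples.
X, y = make_classification(n_samples=200, n_features=60, n_informative=8, random_state=0)

# Standard SVM-RFE: a linear SVM is refit repeatedly, dropping the lowest-weight
# features each round (the paper replaces this weight criterion with ICOMP).
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=8, step=5).fit(X, y)
print("selected feature indices:", list(selector.get_support(indices=True)))
```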

Journal ArticleDOI
TL;DR: Leadbetter et al., as mentioned in this paper, generalized the extreme value theory of i.i.d. sequences to the case of stationary processes and defined an extremal index for measuring the degree of dependence at the extremes; this parameter measures how the extremes cluster together, and $1/\theta$ is interpreted as the average size of these clusters.
Abstract: Leadbetter et al. (M.R. Leadbetter, G. Lindgren, and H. Rootzen, Extremes and Related Properties of Random Sequences and Processes, Springer Series in Statistics, Springer-Verlag: New York, 1983) generalized the extreme value theory of i.i.d. sequences to the case of stationary processes, defining an extremal index $\theta\in]0,1[$ that measures the degree of dependence at the extremes; this parameter measures how the extremes cluster together, and $1/\theta$ is interpreted as the average size of these clusters. Using this parameter and the Peak Over Threshold method, which involves the Generalized Pareto Distribution, we estimate in this work the extreme quantile and the conditional tail expectation for EUR/USD returns.
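A bare-bones peaks-over-threshold sketch with synthetic returns (the extremal-index declustering step of the paper is omitted): exceedances over a high threshold are fitted by a GPD and plugged into the usual quantile and conditional-tail-expectation formulas:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
returns = rng.standard_t(df=4, size=5000) * 0.01      # synthetic heavy-tailed returns

u = np.quantile(returns, 0.95)                        # high threshold
exceed = returns[returns > u] - u
xi, _, sigma = genpareto.fit(exceed, floc=0)          # GPD shape and scale of the excesses

n, n_u, p = len(returns), len(exceed), 0.999
# POT extreme quantile and conditional tail expectation (valid for 0 < xi < 1):
q_p = u + sigma / xi * ((n / n_u * (1.0 - p)) ** (-xi) - 1.0)
cte_p = q_p / (1.0 - xi) + (sigma - xi * u) / (1.0 - xi)
print("extreme quantile:", round(q_p, 4), " conditional tail expectation:", round(cte_p, 4))
```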

Journal ArticleDOI
TL;DR: In this paper, the exact distribution of the linear combination of p independent logistic random variables is studied and two near-exact approximations are developed for this distribution. But, the performance of the approximated distribution is limited.
Abstract: In this work the exact distribution of the linear combination of p independent logistic random variables is studied. It is shown that the exact distribution may be represented as a shifted infinite sum of independent random variables distributed as the difference of two independent Generalized Integer Gamma distributions. In addition, two near-exact approximations are developed for this distribution. Numerical studies are conducted to access the degree of precision and also the computational performance of these approximations. The developed methodology is used to derive near-exact approximations for the linear combination of independent generalized logistic random variables.

Journal ArticleDOI
TL;DR: In this article, a flexible cure rate model was proposed by assuming the number of competing causes for the event of interest to follow the Conway-Maxwell Poisson distribution and the lifetimes of the non-cured individuals to follow a proportional odds model.
Abstract: Cure rate models are useful when modelling lifetime data involving long-term survivors. In this work, we discuss a flexible cure rate model by assuming the number of competing causes for the event of interest to follow the Conway-Maxwell Poisson distribution and the lifetimes of the non-cured individuals to follow a proportional odds model. The baseline distribution is considered to be either the Weibull or the log-logistic distribution. Under right censoring, we develop the maximum likelihood estimators using the EM algorithm. Model discrimination among some well-known special cases is discussed under both likelihood- and information-based criteria. An extensive simulation study is carried out to examine the performance of the proposed model and the inferential methods. Finally, a cutaneous melanoma dataset is analyzed for illustrative purposes.

Journal ArticleDOI
TL;DR: In this paper, an approximation on the distribution of population densities and the arrangement of urban activities, over a set of n locations, is derived by using the classical multiobjective optimization theory and Shannon entropy.
Abstract: In this paper, an approximation on the distribution of population densities and the arrangement of urban activities, over a set of n locations, is derived by using the classical multiobjective optimization theory and Shannon entropy.

Journal ArticleDOI
TL;DR: In this article, the use of the Extended Generalized Lambda Distribution (XGLD) has been proposed for approximating the statistical distribution of data and constructing a quality control chart based on it.
Abstract: In quality control (QC) of individually tested pieces, the distribution of the variable of interest is in many cases not normal. Therefore, the use of conventional statistical methods to design control charts (X charts) can lead to misleading and inaccurate results and hence can cause problems for users, because in such a situation the failure in the production line would not be expected to be detected in a timely manner. Therefore, in this study, the use of the Extended Generalized Lambda Distribution (XGLD) has been proposed for approximating the statistical distribution of data and constructing a quality control chart based on it. For this purpose, the distribution function of the data was first estimated by the XGLD, and then, based on this distribution, the control limits were calculated. The research data consist of the tensile strength of 149 aluminum plates categorized into 22 classes. For the goodness-of-fit test, the chi-square test was utilized, and to evaluate the effectiveness of the proposed model, the average run length (ARL) method was used. The results showed that the ARL values in the proposed method are lower than those in the conventional method. Since this is true at all points, it can be concluded that the use of the new method can lead to faster detection of the system's failure and, consequently, the subsequent costs can be reduced. In addition, since the XGLD is a highly flexible distribution and can estimate many conventional (and even unconventional) distributions with high precision, the use of the proposed method instead of other methods based on non-normal distributions will increase the accuracy of the system's failure detection.
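The XGLD itself is not available in common Python libraries; the sketch below only illustrates the general recipe with a gamma distribution as a stand-in: fit a non-normal distribution, take the control limits as its tail quantiles, and read the ARL as the reciprocal of the probability of a point falling outside the limits. All data and the shift size are invented:

```python
from scipy.stats import gamma

# Stand-in for the tensile-strength data: 149 values from a skewed distribution.
strength = gamma.rvs(a=8.0, scale=20.0, size=149, random_state=0)

# Fit the (stand-in) skewed distribution and set probability-based control limits.
a_hat, loc_hat, scale_hat = gamma.fit(strength)
alpha = 0.0027                                        # same false-alarm rate as 3-sigma limits
lcl = gamma.ppf(alpha / 2, a_hat, loc=loc_hat, scale=scale_hat)
ucl = gamma.ppf(1 - alpha / 2, a_hat, loc=loc_hat, scale=scale_hat)

# In-control ARL is the reciprocal of the false-alarm probability.
print("LCL, UCL:", round(lcl, 1), round(ucl, 1), " ARL0 =", round(1 / alpha))

# ARL under a hypothetical upward shift of 20 units in the process mean.
p_out_shift = (gamma.cdf(lcl - 20, a_hat, loc=loc_hat, scale=scale_hat)
               + gamma.sf(ucl - 20, a_hat, loc=loc_hat, scale=scale_hat))
print("ARL1 under a +20 shift:", round(1 / p_out_shift, 1))
```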

Journal ArticleDOI
TL;DR: In this article, difference-based Liu and non-Liu type shrinkage estimators, along with their positive parts, are defined in the semiparametric linear model when the errors are dependent and some nonstochastic linear restrictions are imposed.
Abstract: In this article, under a multicollinearity setting, we define difference-based Liu and non-Liu type shrinkage estimators along with their positive parts in the semiparametric linear model, when the errors are dependent and some nonstochastic linear restrictions are imposed. We derive the biases and exact risk expressions of these estimators and obtain the region of optimality of each estimator. Also, necessary and sufficient conditions, for the superiority of the difference-based Liu estimator over its counterpart, for choosing the Liu parameter d are established. Finally, we illustrate the performance of these estimators with a simulation study.

Journal ArticleDOI
Mahdi Roozbeh1
TL;DR: In this article, the ridge estimators are considered and their restricted regression counterparts are proposed when the errors are dependent under a multicollinearity and high-dimensionality setting.
Abstract: Modern statistical analysis often encounters linear models with the number of explanatory variables much larger than the sample size. Estimation in these high-dimensional problems needs some regularization methods to be employed due to rank deficiency of the design matrix. In this paper, the ridge estimators are considered and their restricted regression counterparts are proposed when the errors are dependent under a multicollinearity and high-dimensionality setting. The asymptotic distributions of the proposed estimators are exactly derived. Incorporating the information contained in the restricted estimator, a shrinkage type ridge estimator is also exhibited and its asymptotic risk is analyzed under some special cases. To evaluate the efficiency of the proposed estimators, a Monte-Carlo simulation along with a real example are considered.
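For reference, the ordinary ridge estimator that the paper builds on has the familiar closed form $\hat{\beta}(k) = (X^\top X + kI)^{-1}X^\top y$; a minimal numpy sketch on synthetic collinear, high-dimensional data (the restricted and shrinkage variants analyzed in the paper are not shown):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 50, 200                                   # more predictors than observations
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)    # near-collinear pair of predictors
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + rng.normal(scale=0.5, size=n)

k = 1.0                                          # ridge (biasing) parameter
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
print(np.round(beta_ridge[:5], 2))               # first few estimated coefficients
```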

Journal ArticleDOI
TL;DR: A novel innovative approach is proposed in which using dual tree complex wavelet transform (DT-CWT), low and high frequency sub bands are generated and improved NEDI (INEDI) algorithm gives better results on high frequency components which lead to high resolution image without artifacts.
Abstract: Super resolution is a method that reconstructs a higher resolution image from a single captured image or a set of captured low resolution images. Super resolution imaging is used for several image processing applications like medical imaging, earth observation systems and surveillance systems. Image interpolation is one of the conventional methods used to enhance the resolution of an image. Basic linear interpolation methods like bilinear and bicubic interpolation give a blurred image as a result. Non-linear interpolation methods like New Edge Directed Interpolation (NEDI), curvature-based interpolation and neural network based interpolation enhance the image but have limitations like several artifacts. In this paper, a novel approach is proposed in which low and high frequency sub-bands are generated using the dual tree complex wavelet transform (DT-CWT). High frequency sub-band images are interpolated using improved NEDI, which is NEDI with a circular window and a dynamic window. The improved NEDI (INEDI) algorithm proposed in the paper gives better results on high frequency components, which leads to a high resolution image without artifacts. Inverse DT-CWT is applied on the interpolated sub-bands to reconstruct the high resolution image. Registration is applied on both images, and shift-adaptable bilinear interpolation is applied, which reconstructs the image at an interpolation factor of 4. The proposed approach is verified for different interpolation factors and for different satellite images. The accuracy of the proposed approach is verified by several contrast features. The algorithm proposed in this paper outperforms state-of-the-art algorithms.

Journal ArticleDOI
Esin Avci1
TL;DR: In this article, the authors demonstrate the flexibility of the Conway-Maxwell-Poisson (COM-Poisson) regression model on simulation and alg data and show that it is suitable for modeling over- and under-dispersed distributions.
Abstract: The Poisson regression model is the most common model for fitting count data. However, it is suitable only for modeling equi-dispersed distributions. The Conway-Maxwell-Poisson (COM-Poisson) regression model allows modeling of over- and under-dispersed distributions. The purpose of this study is to demonstrate the flexibility of the COM-Poisson regression model on simulation and alg data.
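For readers unfamiliar with the distribution, the COM-Poisson pmf is P(Y = y) = λ^y / ((y!)^ν Z(λ, ν)) with Z(λ, ν) = Σ_j λ^j/(j!)^ν, where ν > 1 gives under-dispersion, ν < 1 over-dispersion and ν = 1 recovers the Poisson; a small numerically stable sketch (the truncation point of the series is chosen arbitrarily):

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def com_poisson_pmf(y, lam, nu, j_max=200):
    """COM-Poisson pmf evaluated in log space; j_max truncates the normalizing series Z."""
    j = np.arange(j_max + 1)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)   # log of lam^j / (j!)^nu
    log_z = logsumexp(log_terms)
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_z)

y_vals = np.arange(60)
for nu in (0.5, 1.0, 2.0):                # over-dispersed, Poisson, under-dispersed cases
    pmf = com_poisson_pmf(y_vals, lam=3.0, nu=nu)
    mean = np.sum(y_vals * pmf)
    var = np.sum((y_vals - mean) ** 2 * pmf)
    print(f"nu={nu}: mean={mean:.2f}, variance={var:.2f}")
```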