
Showing papers in "Statistics, Optimization and Information Computing in 2020"


Journal ArticleDOI
TL;DR: In this paper, the unknown parameter of the new model was estimated using the maximum likelihood method, the Cramér-von Mises method, the bootstrapping method, the least squares method and the weighted least squares method.
Abstract: In this paper, after introducing a new model along with its properties, we estimate the unknown parameter of the new model using the maximum likelihood method, the Cramér-von Mises method, the bootstrapping method, the least squares method and the weighted least squares method. We assess the performance of all estimation methods by simulation. All methods perform well, but the bootstrapping method is best for modeling relief times, whereas the maximum likelihood method is best for modeling survival times. Censored data modeling with covariates is addressed, along with the index plot of the modified deviance residuals and its Q-Q plot.

21 citations
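A minimal sketch of two of the estimation routes named above (maximum likelihood and bootstrapping). Since the paper's new model is not reproduced in this listing, an exponential lifetime model is used as a stand-in, and the choice of B = 1000 resamples is illustrative:

```python
# Sketch: comparing maximum likelihood and bootstrap estimation for a
# one-parameter lifetime model. The exponential stands in for the
# paper's (unspecified) new model.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)   # simulated relief times

# Maximum likelihood: for the exponential, the MLE of the scale is the mean.
mle_scale = data.mean()

# Nonparametric bootstrap of the MLE (B = 1000 is an illustrative choice).
B = 1000
boot = np.array([rng.choice(data, size=data.size, replace=True).mean()
                 for _ in range(B)])
print(f"MLE scale: {mle_scale:.3f}")
print(f"bootstrap mean: {boot.mean():.3f}, SE: {boot.std(ddof=1):.3f}")
```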


Journal ArticleDOI
TL;DR: In this paper, the authors studied higher-order fractional symmetric duality over arbitrary cones for Mond-Weir type programs under higher-order K-(C, α, ρ, d)-convexity/pseudoconvexity assumptions.
Abstract: In this paper, we introduce the definition of higher-order K-(C, α, ρ, d)-convexity/pseudoconvexity over cones and discuss nontrivial numerical examples showing that such functions exist. The purpose of the paper is to study higher-order fractional symmetric duality over arbitrary cones for nondifferentiable Mond-Weir type programs under higher-order K-(C, α, ρ, d)-convexity/pseudoconvexity assumptions. Next, we prove appropriate duality relations under the aforesaid assumptions.

21 citations



Journal ArticleDOI
TL;DR: A single direction with double step length method for solving systems of nonlinear equations is presented and proven to be globally convergent under appropriate conditions.
Abstract: In this paper, a single direction with double step length method for solving systems of nonlinear equations is presented. The main idea of the algorithm is to approximate the Jacobian via an acceleration parameter. Furthermore, the two step lengths are calculated using an inexact line search procedure. The method is matrix-free, and so is advantageous when solving large-scale problems. The proposed method is proven to be globally convergent under appropriate conditions. The preliminary numerical results reported in this paper on large-scale benchmark test problems show that the proposed method is practically quite effective.

11 citations
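The ingredients named in the abstract (a matrix-free direction plus step lengths from an inexact line search) can be illustrated with a generic scheme. This is only a sketch in the same spirit, not the authors' exact two-step-length algorithm:

```python
# Illustrative sketch: a matrix-free residual iteration with a
# backtracking (inexact) line search. No Jacobian is ever formed.
import numpy as np

def solve(F, x0, tol=1e-8, max_iter=500):
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                      # cheap direction, no Jacobian
        t = 1.0                      # step length from backtracking
        while np.linalg.norm(F(x + t * d)) >= (1 - 1e-4 * t) * np.linalg.norm(Fx):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d
    return x

# Example: solve x - cos(x) = 0 componentwise.
root = solve(lambda x: x - np.cos(x), np.zeros(3))
```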


Journal ArticleDOI
TL;DR: In this article, the authors present a new command for the identification of overdispersion in the data as an alternative to the procedure presented by Cameron and Trivedi [5], since it directly identifies overdisparity in data without the need to previously estimate a specific type of count-data model.
Abstract: Stata has several procedures that can be used in analyzing count-data regression models and, more specifically, in studying the behavior of the dependent variable, conditional on explanatory variables. Identifying overdispersion in count-data models is one of the most important procedures that allow researchers to correctly choose estimations such as Poisson or negative binomial, given the distribution of the dependent variable. The main purpose of this paper is to present a new command for the identification of overdispersion in the data as an alternative to the procedure presented by Cameron and Trivedi [5], since it directly identifies overdispersion in the data without the need to previously estimate a specific type of count-data model. When estimating Poisson or negative binomial regression models in which the dependent variable is quantitative, with discrete and non-negative values, the new Stata package overdisp helps researchers to directly propose more consistent and adequate models. As a second contribution, we also present a simulation to show the consistency of the overdispersion test using the overdisp command. Findings show that, if the test indicates equidispersion in the data, there is consistent evidence that the distribution of the dependent variable is, in fact, Poisson. If, on the other hand, the test indicates overdispersion in the data, researchers should investigate more deeply whether the dependent variable actually exhibits better adherence to the Poisson-Gamma distribution or not.

11 citations
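As context for what overdisp automates away, here is a sketch of the Cameron-Trivedi style auxiliary regression test (the procedure the new command is an alternative to), written in Python with statsmodels; the simulated design and variable names are illustrative:

```python
# Sketch of the Cameron-Trivedi auxiliary regression for overdispersion:
# fit a Poisson model, then regress ((y - mu)^2 - y) / mu on mu without a
# constant; a significantly positive coefficient signals overdispersion.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(500, 2)))
m = np.exp(X @ [0.5, 0.3, -0.2])                    # conditional mean
y = rng.negative_binomial(n=2, p=2 / (2 + m))       # overdispersed counts

mu = sm.GLM(y, X, family=sm.families.Poisson()).fit().fittedvalues
aux_y = ((y - mu) ** 2 - y) / mu
aux = sm.OLS(aux_y, mu).fit()        # no constant: the regressor is mu itself
print(aux.tvalues, aux.pvalues)      # t > 0 with small p => overdispersion
```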


Journal ArticleDOI
TL;DR: A simple S_1S_2IT-type model with variable population size is presented, and the results show that the top three parameters that govern the dynamics of the black pod disease are the treatment rate, transmission rate, and planting rate of new trees.
Abstract: Black pod disease is caused by fungi of the species Phytophthora palmivora or Phytophthora megakarya. The disease causes darkening of affected areas of cocoa trees and/or fruits, leads to a significant reduction in crop yields, and decreases the lifespan of the plant. This study presents a simple S_1S_2IT-type model with variable population size to assess the impact of fungicide treatment on the dynamics of the black pod disease. We carry out both theoretical analysis and numerical simulations of the model. In particular, we analyze the existence of equilibrium points and their stability, and simulate the model using data on reported black pod cases from Ghana. In addition, we perform a sensitivity analysis of the basic reproduction number with respect to the model parameters. The results show that the top three parameters that govern the dynamics of the black pod disease are the treatment rate, transmission rate, and planting rate of new trees.

9 citations
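A minimal sketch of how such a compartmental model can be simulated numerically. The right-hand side below is an illustrative S_1S_2IT-style system with assumed parameter values, not the paper's calibrated model for the Ghana data:

```python
# Illustrative compartmental sketch: two susceptible tree classes,
# infected, and treated; parameters are assumptions for demonstration.
from scipy.integrate import solve_ivp

beta, tau, pi, mu = 0.3, 0.5, 0.1, 0.02  # transmission, treatment, planting, removal

def rhs(t, y):
    s1, s2, i, tr = y
    n = sum(y)
    return [pi * n - beta * s1 * i / n - mu * s1,        # newly planted trees
            -beta * s2 * i / n - mu * s2,                # established trees
            beta * (s1 + s2) * i / n - (tau + mu) * i,   # infected
            tau * i - mu * tr]                           # treated

sol = solve_ivp(rhs, (0, 200), [800, 150, 50, 0])
```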



Journal ArticleDOI
TL;DR: In this paper, a new algorithm for the computation of the implied covariance matrix is introduced, consisting of a modification of the finite iterative method; the existing methods are characterized by the manner of computation and are based on some a priori assumptions.
Abstract: Structural Equation Modeling (SEM) is a statistical technique that assesses a hypothesized causal model by showing whether or not it fits the available data. One of the major steps in SEM is the computation of the covariance matrix implied by the specified model. This matrix is crucial for estimating the parameters, testing the validity of the model, and making useful interpretations. In the present paper, two methods used for this purpose are presented: Jöreskog's formula and the finite iterative method. These methods are characterized by the manner of computation and are based on some a priori assumptions. To make the computation simpler and the assumptions less restrictive, a new algorithm for the computation of the implied covariance matrix is introduced. It consists of a modification of the finite iterative method. An illustrative example of the proposed method is presented. Furthermore, theoretical and numerical comparisons between the existing methods and the proposed algorithm are discussed and illustrated.

8 citations
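For a flavor of what "implied covariance" means, here is a sketch for a simple recursive path model, a special case of the structural models discussed above: with structural equations y = By + e and Cov(e) = Psi, the implied covariance is Sigma = (I - B)^{-1} Psi (I - B)^{-T}. The matrices below are illustrative:

```python
# Sketch: implied covariance of a small recursive path model.
import numpy as np

B = np.array([[0.0, 0.0, 0.0],     # y1 exogenous
              [0.6, 0.0, 0.0],     # y2 <- y1
              [0.3, 0.4, 0.0]])    # y3 <- y1, y2
Psi = np.diag([1.0, 0.5, 0.5])     # residual (co)variances

A = np.linalg.inv(np.eye(3) - B)
Sigma = A @ Psi @ A.T              # implied covariance matrix
```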


Journal ArticleDOI
TL;DR: A generalized modification of the Kumaraswamy distribution is proposed, and its distributional and characterizing properties are studied, to illustrate the usefulness and the flexibility of this distribution in application to real-life data.
Abstract: In this paper, a generalized modification of the Kumaraswamy distribution is proposed, and its distributional and characterizing properties are studied. This distribution is closed under scaling and exponentiation, and has some well-known distributions as special cases, such as the generalized uniform, triangular, beta, power function, Minimax, and some other Kumaraswamy-related distributions. The moment generating function, Lorenz and Bonferroni curves, and moments, including the mean, variance, moments about the origin, and the harmonic, incomplete, probability-weighted, L-, and trimmed L-moments, are derived. The maximum likelihood estimation method is used for estimating the parameters and is applied to six different simulated data sets of this distribution, in order to check the performance of the estimation method through the estimated parameters' mean squared errors computed from the different simulated sample sizes. Finally, four real-life data sets are used to illustrate the usefulness and the flexibility of this distribution in application to real-life data.

8 citations
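A sketch of maximum likelihood estimation for the baseline two-parameter Kumaraswamy distribution, whose density is f(x) = a b x^(a-1) (1 - x^a)^(b-1) on (0, 1); the paper's generalized modification adds further parameters not reproduced here:

```python
# Sketch: MLE for the standard Kumaraswamy(a, b) by direct optimization.
import numpy as np
from scipy.optimize import minimize

def negloglik(params, x):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(x)
                   + (b - 1) * np.log1p(-x ** a))

rng = np.random.default_rng(2)
u = rng.uniform(size=300)
x = (1 - (1 - u) ** (1 / 3.0)) ** (1 / 2.0)   # inverse-CDF sample, a=2, b=3

fit = minimize(negloglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print(fit.x)   # should be near (2, 3)
```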


Journal ArticleDOI
TL;DR: In this article, the authors discuss some drawbacks of the cardinality constrained mean-variance (CCMV) portfolio optimization with short selling and a risk-neutral interest rate when the lower and upper bounds of the assets' contributions are -1/K and 1/K (K denotes the number of assets in the portfolio).
Abstract: In this paper, first, we discuss some drawbacks of the cardinality constrained mean-variance (CCMV) portfolio optimization with short selling and a risk-neutral interest rate when the lower and upper bounds of the assets' contributions are -1/K and 1/K (K denotes the number of assets in the portfolio). Second, we present an improved variant using absolute returns instead of the returns to include short selling in the model. Finally, some numerical results are provided using the data set of the S&P 500 index, Information Technology, and the MIBTEL index in terms of returns and Sharpe ratios to compare the proposed models with those in the literature.

8 citations
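The drawback is easy to see concretely: if each of the K selected weights is bounded by [-1/K, 1/K], then the budget constraint sum(w) = 1 can only hold when every weight sits exactly at its upper bound 1/K, so the optimization collapses to equal weighting and short positions can never actually occur. A quick numerical check with an illustrative covariance matrix:

```python
# Check: minimum-variance weights under bounds [-1/K, 1/K] and sum(w) = 1
# are forced to the unique feasible point w_i = 1/K.
import numpy as np
from scipy.optimize import minimize

K = 5
rng = np.random.default_rng(3)
M = rng.normal(size=(K, K))
Sigma = M @ M.T                     # arbitrary positive definite covariance

res = minimize(lambda w: w @ Sigma @ w, np.full(K, 1 / K),
               bounds=[(-1 / K, 1 / K)] * K,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(res.x)                        # every weight equals 1/K = 0.2
```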


Journal ArticleDOI
TL;DR: In this article, the authors discuss the estimation of parameters and reliability characteristics of the Lindley distribution under random censoring and derive the maximum likelihood estimators of the unknown parameters and reliability characteristics.
Abstract: This article deals with the estimation of parameters and reliability characteristics of the Lindley distribution under random censoring. The expected time on test based on randomly censored data is obtained. The maximum likelihood estimators of the unknown parameters and reliability characteristics are derived. The asymptotic, bootstrap-p and bootstrap-t confidence intervals of the parameters are constructed. The Bayes estimators of the parameters and reliability characteristics under the squared error loss function using non-informative and gamma informative priors are obtained. For computing the Bayes estimates, the Lindley approximation and MCMC methods are considered. Highest posterior density (HPD) credible intervals of the parameters are obtained using the MCMC method. The various estimation procedures are compared using a Monte Carlo simulation study. Finally, a real data set is analyzed for illustration purposes.
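A sketch of the randomly censored likelihood for the Lindley distribution, whose density is f(x; θ) = θ²/(θ+1) (1+x) e^(-θx) and survival function S(x; θ) = (θ+1+θx)/(θ+1) e^(-θx): observed units contribute f, censored units contribute S. The simulated data below are illustrative:

```python
# Sketch: MLE of the Lindley parameter under random censoring.
import numpy as np
from scipy.optimize import minimize_scalar

def logpdf(x, t):
    return 2 * np.log(t) - np.log(t + 1) + np.log1p(x) - t * x

def logsf(x, t):
    return np.log(t + 1 + t * x) - np.log(t + 1) - t * x

def negloglik(t, x, delta):
    if t <= 0:
        return np.inf
    return -np.sum(delta * logpdf(x, t) + (1 - delta) * logsf(x, t))

rng = np.random.default_rng(4)
x = rng.exponential(1.0, 200)            # stand-in lifetimes
c = rng.exponential(2.0, 200)            # random censoring times
obs, delta = np.minimum(x, c), (x <= c).astype(float)

res = minimize_scalar(negloglik, bounds=(1e-6, 50), args=(obs, delta),
                      method="bounded")
print(res.x)
```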

Journal ArticleDOI
TL;DR: Among the three selection variants of the genetic algorithm, rank-selection GA outperforms the other two (tournament and roulette wheel) in terms of the accuracy of final solutions, success rate, convergence speed, and stability.
Abstract: Estimating a highly accurate model of a liquid flow process and controlling the liquid flow rate from experimental data is an important task for engineers due to its nonlinear characteristics, and efficient optimization techniques are essential to accomplish it. In most process-control industries, the flow rate depends on multiple parameters such as sensor output, pipe diameter, liquid conductivity, liquid viscosity and liquid density. With traditional optimization techniques, it is very time-consuming to manually tune the parameters to obtain the optimal flow rate from the process. Hence an alternative approach, computational optimization, is utilized using different computational intelligence techniques. In this paper, three different selection schemes for the genetic algorithm are proposed and tested on the liquid flow process. The proposed algorithm mimics the genetic evolution of species, allowing consecutive generations of a population to adapt to their environment. Equations from Response Surface Methodology (RSM) and Analysis of Variance (ANOVA) are used as nonlinear models, and these models are optimized using the proposed selection variants of the genetic algorithm. It is observed that among the three selection schemes, rank-selection GA is better than the other two (tournament and roulette wheel) in terms of the accuracy of final solutions, success rate, convergence speed, and stability.
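A sketch of the rank-selection mechanism the study favors, embedded in a bare-bones genetic algorithm; the quadratic objective stands in for the fitted RSM flow-rate model, and all parameter choices are illustrative:

```python
# Sketch: genetic algorithm with rank selection. Individuals are ranked by
# fitness and selected with probability proportional to rank, not raw fitness.
import numpy as np

rng = np.random.default_rng(5)

def fitness(pop):                        # placeholder objective (maximize)
    return -np.sum((pop - 0.5) ** 2, axis=1)

pop = rng.uniform(0, 1, size=(40, 3))    # 40 candidates, 3 process parameters
for _ in range(100):
    f = fitness(pop)
    ranks = np.argsort(np.argsort(f)) + 1          # worst gets rank 1
    probs = ranks / ranks.sum()                    # rank-proportional selection
    parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]
    cut = rng.integers(1, pop.shape[1], size=len(pop) // 2)
    for i, c in enumerate(cut):                    # one-point crossover
        tail = parents[2 * i, c:].copy()
        parents[2 * i, c:] = parents[2 * i + 1, c:]
        parents[2 * i + 1, c:] = tail
    parents += rng.normal(0, 0.02, parents.shape)  # Gaussian mutation
    pop = np.clip(parents, 0, 1)

best = pop[np.argmax(fitness(pop))]
```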

Journal ArticleDOI
TL;DR: In this paper, the authors introduce integral stochastic orderings of two important classes of distributions that are commonly used to fit data possessing high values of skewness and/or kurtosis.
Abstract: In this paper, we introduce integral stochastic orderings of two of the most important classes of distributions that are commonly used to fit data possessing high values of skewness and/or kurtosis. The first is based on the selection distributions, which originated with the univariate skew-normal distribution; a broad, flexible and recent class in this area is the scale and shape mixture of multivariate skew-normal distributions. The second is the general class of normal mean-variance mixture distributions. We then derive necessary and sufficient conditions for comparing random vectors from these two classes of distributions. The integral orders considered here are the usual, concordance, supermodular, convex, increasing convex and directionally convex stochastic orders. Moreover, for bivariate random vectors, in the sense of the stop-loss and bivariate concordance stochastic orders, the dependence strength of random portfolios is characterized in terms of the order of correlations.

Journal ArticleDOI
TL;DR: In this article, the problem of mean square optimal estimation of linear functionals which depend on the unknown values of a periodically correlated stochastic sequence is considered, and the estimates are based on observations of the sequence with a noise.
Abstract: The problem of mean square optimal estimation of linear functionals which depend on the unknown values of a periodically correlated stochastic sequence is considered. The estimates are based on observations of the sequence with noise. Formulas for calculating the mean square errors and the spectral characteristics of the optimal estimates of the functionals are derived in the case of spectral certainty, where the spectral densities of the sequences are exactly known. Formulas that determine the least favorable spectral densities and the minimax spectral characteristics are proposed in the case of spectral uncertainty, where the spectral densities of the sequences are not exactly known but some classes of admissible spectral densities are specified.

Journal ArticleDOI
TL;DR: In this paper, the authors used univariate generalized autoregressive conditional heteroscedasticity (GARCH) models for modeling volatility of the BRICS (Brazil, Russia, India, China and South Africa) stock markets.
Abstract: Volatility modelling is a key factor in equity markets for risk and portfolio management. This paper focuses on the use of univariate generalized autoregressive conditional heteroscedasticity (GARCH) models for modelling volatility of the BRICS (Brazil, Russia, India, China and South Africa) stock markets. The study extends the literature by conducting the volatility modelling under the assumptions of seven error distributions: the normal, skewed-normal, Student's t, skewed-Student's t, generalized error distribution (GED), skewed-GED and generalized hyperbolic (GHYP) distributions. It was observed that, using an ARMA(1, 1)-GARCH(1, 1) model, the volatilities of the Brazilian Bovespa and the Russian IMOEX markets can both be well characterized (or described) by a heavy-tailed Student's t distribution, while the Indian NIFTY market's volatility is best characterized by the generalized hyperbolic (GHYP) distribution. Also, the Chinese SHCOMP and South African JALSH markets' volatilities are best described by the skew-GED and skew-Student's t distributions, respectively. The study further observed that the persistence of volatility in the BRICS markets does not follow the same hierarchical pattern under the error distributions, except under the skew-Student's t and GHYP distributions, where the pattern is the same. Under these two assumptions, i.e. the skew-Student's t and GHYP, in descending order of magnitude, volatility persistence is highest in the Chinese market, followed by the South African market, then the Russian, Indian and Brazilian markets, respectively. However, under each of the five non-Gaussian error distributions, the Chinese market is the most volatile, while the least volatile is the Brazilian market.
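A sketch of one such fit using the Python arch package. Note that arch supports an AR mean rather than a full ARMA mean, so an AR(1) mean is used here as a stand-in, and `returns` is assumed to be a percentage-return series (e.g. a pandas Series):

```python
# Sketch: GARCH(1, 1) with Student's t errors via the arch package.
from arch import arch_model

am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())
# Other available error distributions include dist="normal", "skewt", "ged".
```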

Journal ArticleDOI
TL;DR: The resolvent operator technique is used to calculate the approximate common solution of the generalized system of variational-like inclusion problems involving αβ-symmetric η-monotone mappings and a fixed point problem for nonlinear Lipschitz mappings.
Abstract: In this article, we introduce and study a generalized system of mixed variational-like inclusion problems involving αβ-symmetric η-monotone mappings. We use the resolvent operator technique to calculate the approximate common solution of the generalized system of variational-like inclusion problems involving αβ-symmetric η-monotone mappings and a fixed point problem for nonlinear Lipschitz mappings. We study the strong convergence of the sequences generated by the proposed Mann type iterative algorithms. Moreover, we consider an altering points problem associated with a generalized system of variational-like inclusion problems. To calculate the approximate solution of our system, we propose a parallel S-iterative algorithm and study the convergence of the sequences it generates using the technique of the altering points problem. The results presented in this paper may be viewed as generalizations and refinements of results existing in the literature.

Journal ArticleDOI
TL;DR: In this paper, a flexible ranked set sampling scheme encompassing various existing sampling methods is proposed, which may be used to minimize the error of ranking and the cost of sampling; based on the data obtained from this scheme, the maximum likelihood estimation as well as the Fisher information are studied for the scale family of distributions.
Abstract: A flexible ranked set sampling scheme encompassing various existing sampling methods is proposed. This scheme may be used to minimize the error of ranking and the cost of sampling. Based on the data obtained from this scheme, the maximum likelihood estimation as well as the Fisher information are studied for the scale family of distributions. The existence and uniqueness of the maximum likelihood estimator of the scale parameter of the exponential and normal distributions are investigated. Moreover, the optimal scheme is derived via simulation and numerical computations.
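For reference, a sketch of one cycle of the classical ranked set sampling scheme that the proposed flexible scheme generalizes, here with perfect ranking and an illustrative set size m = 5:

```python
# Sketch: one cycle of classical ranked set sampling. Draw m sets of m
# units, rank each set (here by the true values, i.e. perfect ranking),
# and keep the i-th order statistic from the i-th set.
import numpy as np

rng = np.random.default_rng(6)
m = 5
sets = rng.exponential(scale=2.0, size=(m, m))     # m sets of m units
rss = np.sort(sets, axis=1)[np.arange(m), np.arange(m)]
# rss[i] is the (i+1)-th smallest of set i; only these m values are measured.
```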

Journal ArticleDOI
TL;DR: In this article, a special class of conic optimization problems, consisting of set-semidefinite programming problems, where the set K is a polyhedral convex cone, is considered.
Abstract: In this paper, we consider a special class of conic optimization problems, consisting of set-semidefinite (or K-semidefinite) programming problems, where the set K is a polyhedral convex cone. For these problems, we introduce the concept of immobile indices and study the properties of the set of normalized immobile indices and of the feasible set. This study provides the main result of the paper, which is the formulation and proof of new first-order optimality conditions in the form of a criterion. The optimality conditions are explicit and do not use any constraint qualifications. For the case of a linear cost function, we reformulate the K-semidefinite problem in a regularized form and construct its dual. We show that the pair of primal and dual regularized problems satisfies the strong duality relation, meaning that the duality gap vanishes.

Journal ArticleDOI
TL;DR: In this paper, the Bayesian and non-Bayesian estimation of a two-parameter Weibull lifetime model in the presence of progressive first-failure censored data with binomial random removals is considered.
Abstract: In this paper, the Bayesian and non-Bayesian estimation of a two-parameter Weibull lifetime model in the presence of progressive first-failure censored data with binomial random removals is considered. Based on the s-normal approximation to the asymptotic distribution of the maximum likelihood estimators, two-sided approximate confidence intervals for the unknown parameters are constructed. Using gamma conjugate priors, several Bayes estimates and associated credible intervals are obtained relative to the squared error loss function. The proposed estimators cannot be expressed in closed form and are evaluated numerically by a suitable iterative procedure. A Bayesian approach is developed using Markov chain Monte Carlo techniques to generate samples from the posterior distributions and, in turn, to compute the Bayes estimates and associated credible intervals. To analyze the performance of the proposed estimators, a Monte Carlo simulation study is conducted. Finally, a real data set is discussed for illustration purposes.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed the subclasses of the generalised hyperbolic distributions (GHDs), as appropriate models for capturing these characteristics for the crude oil and gasoline returns.
Abstract: Over the past decade, crude oil prices have risen dramatically, making the oil market very volatile and risky; hence, implementing an efficient risk management tool against market risk is crucial. Value-at-risk (VaR) has become the most common tool in this context to quantify market risk. Financial data typically have certain features such as volatility clustering, asymmetry, and heavy and semi-heavy tails, making it hard, if not impossible, to model them using a normal distribution. In this paper, we propose the subclasses of the generalised hyperbolic distributions (GHDs) as appropriate models for capturing these characteristics for crude oil and gasoline returns. We also introduce a new subclass of GHDs, namely the normal reciprocal inverse Gaussian distribution (NRIG), in evaluating the VaR for the crude oil and gasoline market. Furthermore, VaR estimation and backtesting procedures using the Kupiec likelihood ratio test are conducted to test the extreme tails of these models. The main findings from the Kupiec likelihood ratio test statistics suggest that the best GHD model should be chosen at the various VaR levels. Thus, the final results of this research allow risk managers, financial analysts, and energy market academics to be flexible in choosing a robust risk quantification model for crude oil and gasoline returns at their specific VaR levels of interest. Particularly for the NRIG, the results suggest that better VaR estimates are provided for long positions.
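A sketch of the Kupiec unconditional-coverage likelihood ratio test used in the backtesting step; the violation count and sample size below are illustrative numbers, not the paper's results:

```python
# Sketch: Kupiec LR test. With T observations, x VaR violations and
# nominal level p, LR_uc is asymptotically chi-square with 1 df.
import numpy as np
from scipy.stats import chi2

def kupiec_lr(T, x, p):
    phat = x / T
    lr = -2 * ((T - x) * np.log(1 - p) + x * np.log(p)
               - (T - x) * np.log(1 - phat) - x * np.log(phat))
    return lr, chi2.sf(lr, df=1)

lr, pval = kupiec_lr(T=1000, x=14, p=0.01)   # illustrative numbers
```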



Journal ArticleDOI
TL;DR: In this article, a new formulation of the fractional Euler-Lagrange equation for variational problems with a Lagrangian is presented, and a new exact solution for a particular variational problem is obtained.
Abstract: In this paper we present advances in fractional variational problems with a Lagrangian depending on Caputo fractional and classical derivatives. New formulations of the fractional Euler-Lagrange equation are shown for the basic and isoperimetric problems, one in an integral form, and the other depending only on the Caputo derivatives. The advantage is that Caputo derivatives are more appropriate for modeling problems than the Riemann-Liouville derivatives and make the calculations easier to solve because, in some cases, their behavior is similar to the behavior of classical derivatives. Finally, a new exact solution for a particular variational problem is obtained.
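For reference, the left Caputo derivative of order 0 < α < 1, the operator these formulations are stated in terms of:

```latex
% Left Caputo fractional derivative of order 0 < alpha < 1:
{}^{C}\!D_{a+}^{\alpha}f(t)
  = \frac{1}{\Gamma(1-\alpha)}\int_{a}^{t}\frac{f'(\tau)}{(t-\tau)^{\alpha}}\,d\tau,
\qquad 0<\alpha<1 .
```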

Journal ArticleDOI
TL;DR: In this paper, a new lifetime model with four positive parameters, called the Weibull Birnbaum-Saunders distribution, was proposed, which provides great flexibility in modeling data in practice.
Abstract: A new lifetime model, with four positive parameters, called the Weibull Birnbaum-Saunders distribution is proposed. The proposed model extends the Birnbaum-Saunders distribution and provides great flexibility in modeling data in practice. Some mathematical properties of the new distribution are obtained including expansions for the cumulative and density functions, moments, generating function, mean deviations, order statistics and reliability. Estimation of the model parameters is carried out by the maximum likelihood estimation method. A simulation study is presented to show the performance of the maximum likelihood estimates of the model parameters. The flexibility of the new model is examined by applying it to two real data sets.

Journal ArticleDOI
TL;DR: In this paper, point and interval estimation of stress-strength reliability based on lower record ranked set sampling (RRSS) scheme under the proportional reversed hazard rate model are considered, and a data set has been analyzed for illustrative purposes.
Abstract: In this paper, point and interval estimation of stress-strength reliability based on the lower record ranked set sampling (RRSS) scheme under the proportional reversed hazard rate model is considered. Maximum likelihood, uniformly minimum variance unbiased, and Bayesian estimators of R are derived. We also compare these point estimators with their counterparts obtained under the well-known record-value sampling scheme known as the inverse sampling scheme. Various confidence intervals for the parameter R are constructed and compared based on a simulation study. Moreover, the RRSS scheme is compared with ordinary records in the case of interval estimation. We observe that our proposed point and interval estimators perform well in the estimation of R based on RRSS. We also prove that all calculations are independent of the baseline distribution under the proportional reversed hazard rate model. Finally, a data set is analyzed for illustrative purposes.
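As a sanity check on what is being estimated, a sketch of a plain Monte Carlo approximation of the stress-strength reliability R = P(X < Y); the exponential stress and strength are illustrative stand-ins, not the paper's proportional reversed hazard rate setup:

```python
# Sketch: crude Monte Carlo estimate of R = P(X < Y).
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(1.0, 100_000)    # stress
y = rng.exponential(2.0, 100_000)    # strength
R_hat = (x < y).mean()               # true value here is 2/3
```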

Journal ArticleDOI
TL;DR: This work implements a graphical display employing the biplot methodology, which yields a simultaneous representation of both rows and columns of the matrix with maximum quality; the interpretation of this type of biplot on Hankel-related trajectory matrices is discussed using a real-world data set.
Abstract: The extraction of essential features of any real-valued time series is crucial for exploring and modeling it and for producing, for example, forecasts. Taking advantage of the representation of a time series by its Hankel trajectory matrix constructed using Singular Spectrum Analysis, as well as of its decomposition through Principal Component Analysis via Partial Least Squares, we implement a graphical display employing the biplot methodology. A diversity of types of biplots can be constructed depending on the two matrices considered in the factorization of the trajectory matrix. In this work, we discuss the so-called HJ-biplot, which yields a simultaneous representation of both rows and columns of the matrix with maximum quality. The interpretation of this type of biplot on Hankel-related trajectory matrices is discussed using a real-world data set.
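A sketch of the embedding step that such biplots are built on: forming the Hankel trajectory matrix of a series and factorizing it. The window length and test series are illustrative:

```python
# Sketch: SSA embedding. Windows of length L become columns of the
# Hankel trajectory matrix, which is then factorized by SVD.
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(200)
series = np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=t.size)

L = 50                                        # window length (illustrative)
K = series.size - L + 1
X = np.column_stack([series[i:i + L] for i in range(K)])   # Hankel matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
# The leading (U, s, Vt) triples carry the oscillatory structure; a biplot
# displays rows and columns of X through such a two-factor decomposition.
```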

Journal ArticleDOI
TL;DR: In this article, the authors provide an interpretation of rationality in game theory in which players consider the profit or loss of the opponent in addition to personal profit in the game, and provide an analysis of the decision-making process with a cognitive economics approach.
Abstract: In this paper, we provide an interpretation of rationality in game theory in which players consider the profit or loss of the opponent in addition to personal profit in the game. The goal of analyzing a game with two hyper-rational players is to provide insight into real-world situations that are often more complex than a game with two rational players whose choices of strategy are based only on individual preferences. Hyper-rationality does not mean perfect rationality but offers insight into how human decision-makers behave in interactive decisions. The findings of this research can help enlarge our understanding of the psychological aspects of strategy choices in games and, at the same time, provide an analysis of the decision-making process from a cognitive economics approach.

Journal ArticleDOI
TL;DR: In this article, the recurrence relations satisfied by single and product moments of progressive Type-II right censored order statistics from Hjorth distribution have been obtained, and these results were used to compute the moments for all sample sizes and all censoring schemes (R1,R2,...,Rm),m ≤ n.
Abstract: In this paper some recurrence relations satisfied by single and product moments of progressive Type-II right censored order statistics from Hjorth distribution have been obtained. Then we use these results to compute the moments for all sample sizes and all censoring schemes (R1,R2,...,Rm),m ≤ n, which allow us to obtain BLUEs of location and scale parameters based on progressive type-II right censored samples.

Journal ArticleDOI
TL;DR: In this article, a new family of skew distributions is introduced by extending the alpha skew logistic distribution proposed by Hazarika-Chakraborty [9], which is called the alpha-beta skew logistics (ABSLG) distribution.
Abstract: A new family of skew distributions is introduced by extending the alpha skew logistic distribution proposed by Hazarika-Chakraborty [9]. This family of distributions is called the alpha-beta skew logistic (ABSLG) distribution. The density function, moments, and skewness and kurtosis coefficients are derived. The parameters of the new family are estimated by the maximum likelihood and method-of-moments approaches. The performance of the obtained estimators is examined via a Monte Carlo simulation. The flexibility, usefulness and suitability of the ABSLG distribution are illustrated by analyzing two real data sets.

Journal ArticleDOI
TL;DR: In this article, the authors consider two forms of shrinkage estimators of the mean of a multivariate normal distribution and study the minimaxity and the limits of risks ratios of these estimators to the maximum likelihood estimator.
Abstract: In this article, we consider two forms of shrinkage estimators of the mean $\theta$ of a multivariate normal distribution $X\sim N_{p}\left(\theta, \sigma^{2}I_{p}\right)$ where $\sigma^{2}$ is unknown. We take the prior law $\theta \sim N_{p}\left(\upsilon, \tau^{2}I_{p}\right)$ and we construct a Modified Bayes estimator $\delta_{B}^{\ast}$ and an Empirical Modified Bayes estimator $\delta_{EB}^{\ast}$. We are interested in studying the minimaxity and the limits of the risk ratios of these estimators, relative to the maximum likelihood estimator $X$, when $n$ and $p$ tend to infinity.
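For orientation, a sketch of the classical Bayes shrinkage estimator that such modified and empirical variants build on: under the stated normal model and prior, the posterior mean shrinks $X$ toward $\upsilon$ with weight $\tau^{2}/(\sigma^{2}+\tau^{2})$. The paper's modified estimators, which handle unknown $\sigma^{2}$, differ in detail from this sketch:

```python
# Sketch: classical Bayes shrinkage toward the prior mean upsilon.
# With X ~ N_p(theta, sigma2 * I) and theta ~ N_p(upsilon, tau2 * I),
# the posterior mean is upsilon + tau2/(sigma2 + tau2) * (X - upsilon).
import numpy as np

def bayes_shrinkage(X, upsilon, sigma2, tau2):
    w = tau2 / (sigma2 + tau2)          # shrinkage weight on the data
    return upsilon + w * (X - upsilon)

rng = np.random.default_rng(9)
p, sigma2, tau2 = 50, 1.0, 0.5
theta = rng.normal(0, np.sqrt(tau2), p)
X = theta + rng.normal(0, np.sqrt(sigma2), p)
est = bayes_shrinkage(X, np.zeros(p), sigma2, tau2)
```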