# Showing papers in "Communications in Statistics - Theory and Methods" in 2015

••

TL;DR: In this article, the authors provide a comprehensive treatment of general mathematical properties of Zografos-Balakrishnan-G distributions and apply them to a real data set.

Abstract: For any continuous baseline G distribution, Zografos and Balakrishnan (2009) and Ristic and Balakrishnan (2012) proposed novel generalized gamma-generated distributions (denoted here with the prefixes “Zografos–Balakrishnan-G” and “Ristic–Balakrishnan-G”) with an extra positive parameter. They studied some mathematical properties and presented special sub-models. Here, we provide a comprehensive treatment of general mathematical properties of Zografos–Balakrishnan-G distributions. We discuss estimation of the parameters by maximum likelihood and provide an application to a real data set. We also propose bivariate generalizations.

78 citations

••

TL;DR: In this article, the authors estimate multicomponent stress-strength reliability under the Burr-XII distribution; the parameters and the reliability are estimated by maximum likelihood, and the results are compared using Monte Carlo simulation for small samples.

Abstract: In this paper, we estimate multicomponent stress-strength reliability assuming the Burr-XII distribution. The parameters and the reliability are estimated by maximum likelihood, and the results are compared using Monte Carlo simulation for small samples. The procedure is illustrated using real data sets.

69 citations

••

TL;DR: In this article, a new stochastic ordering is introduced through the monotonicity property of reversed hazards ratio, and two well-known parametric families of distributions are proved to be ordered with respect to their parameters.

Abstract: In this paper, a new stochastic ordering is introduced through the monotonicity property of the reversed hazards ratio. Two well-known parametric families of distributions are proved to be ordered with respect to their parameters according to the newly proposed stochastic order. Other situations where the new order is applicable are described in detail. A number of basic properties of the order, for example preservation under increasing transformations, are derived. Stochastic comparisons of parallel systems are made using the new stochastic order. Examples illustrate the concepts alongside the results.

43 citations

••

TL;DR: In this article, the authors proposed two new six-parameter distributions, called the McDonald Burr III and McDonald Burr XII models, which contain some recently published distributions as special models and provide a comprehensive description of some of their mathematical properties with the hope that they will attract wider applications in lifetime analysis.

Abstract: We propose two new six-parameter distributions, called the McDonald Burr III and McDonald Burr XII models, which contain some recently published distributions as special models. We provide a comprehensive description of some of their mathematical properties with the hope that they will attract wider applications in lifetime analysis. The potentiality of both models to analyze positive data is illustrated by means of two real data sets.

35 citations

••

TL;DR: In this paper, a new lifetime distribution is proposed by using a quadratic rank transmutation map to add a new parameter to the log-logistic distribution; the model is illustrated with data on time to first calving in polled Tabapuã cattle.

Abstract: In this paper, we propose a new lifetime distribution by using a quadratic rank transmutation map in order to add a new parameter to the log-logistic distribution. We provide a comprehensive description of the properties of the proposed distribution along with its reliability study. The usefulness of the transmuted log-logistic distribution for modeling reliability data is illustrated with data on time to first calving in polled Tabapuã cattle.

35 citations
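The quadratic rank transmutation map used above has a standard closed form, G(x) = (1 + λ)F(x) − λF(x)², with |λ| ≤ 1. A minimal sketch; the log-logistic scale and shape values below are arbitrary illustrative choices, not the paper's fitted parameters:

```python
def qrtm(F, lam):
    """Quadratic rank transmutation map:
    G(x) = (1 + lam) * F(x) - lam * F(x)**2,  |lam| <= 1."""
    if abs(lam) > 1:
        raise ValueError("transmutation parameter must satisfy |lam| <= 1")
    return lambda x: (1.0 + lam) * F(x) - lam * F(x) ** 2

def loglogistic_cdf(x, alpha=1.0, beta=2.0):
    """Log-logistic baseline cdf F(x) = x**beta / (alpha**beta + x**beta), x > 0."""
    return x ** beta / (alpha ** beta + x ** beta)

# lam = 0 recovers the baseline; lam != 0 skews mass around the median.
G = qrtm(loglogistic_cdf, lam=0.5)
print(G(1.0))  # baseline median F(1) = 0.5 maps to 1.5*0.5 - 0.5*0.25 = 0.625
```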

••

TL;DR: In this article, an extension of Chen's (2000) family of distributions given by Lehman alternatives (see Gupta et al., 1998) is presented, which is shown to offer another alternative to the generalized Weibull and exponentiated Weibull families for modeling survival data.

Abstract: In this article we introduce an extension of Chen's (2000) family of distributions given by Lehman alternatives (see Gupta et al., 1998) that offers another alternative to the generalized Weibull and exponentiated Weibull families for modeling survival data. The proposed family is to Chen's distribution what the exponentiated Weibull is to the Weibull. A detailed analysis of the density and hazard shapes is carried out. The new model is also seen to fit well the flood data used in fitting the exponentiated Weibull model in Mudholkar and Hutson (1996).

33 citations

••

TL;DR: In this paper, a new lifetime distribution is proposed and studied, where the Harris extended exponential is obtained from a mixture of the exponential and Harris distributions, which arises from a branching process, and several structural properties of the new distribution are discussed, including moments, generating function and order statistics.

Abstract: A new lifetime distribution is proposed and studied. The Harris extended exponential is obtained from a mixture of the exponential and Harris distributions, which arises from a branching process. Several structural properties of the new distribution are discussed, including moments, generating function and order statistics. The new distribution can model data with increasing or decreasing failure rate. The shape of the hazard rate function is controlled by one of the added parameters in an uncomplicated manner. An application to a real dataset illustrates the usefulness of the new distribution.

33 citations

••

TL;DR: In this article, the authors proposed different kinds of estimators based on the Burr type XII model, including uniformly minimum variance unbiased (UMVU) and Bayes estimators.

Abstract: In this paper, several estimators are proposed based on record-breaking observations in the Burr type XII model. We define Bayes as well as empirical Bayes preliminary test estimators, in the same fashion as the ordinary preliminary test estimator, using relevant combinations of uniformly minimum variance unbiased (UMVU) and Bayes estimators. Exact and asymptotic bias and mean square error (MSE) expressions for the proposed estimators are derived under two different conditions on knowledge of the shape parameters. We compare the MSEs and obtain the confidence interval for the parameter of interest in which the preliminary test type estimators outperform the UMVU, Bayes, and empirical Bayes estimators. An application of the ordinary preliminary test estimator is also considered. We conclude with a discussion of practical aspects and a summary.

31 citations

••

TL;DR: In this paper, the generalized inverse gamma distribution (GIG) is proposed, based on the exact form of the generalized gamma function of Kobayashi (1991), a function useful in many problems of diffraction theory and in corrosion problems in new machines.

Abstract: In this article, we introduce a new reliability model, referred to as the generalized inverse gamma (GIG) distribution. The generalization of the inverse gamma distribution is defined based on the exact form of the generalized gamma function of Kobayashi (1991). This function is useful in many problems of diffraction theory and in corrosion problems in new machines. The new distribution has a number of lifetime special sub-models. Some of its statistical properties are studied. The method of maximum likelihood is used for estimating the model parameters, and the observed information matrix is derived. We also demonstrate the usefulness of this distribution on a real data set.

31 citations

••

TL;DR: In this paper, a procedure to test independence of a contingency table, based on a multinomial simulation, is developed, where the original probability table is decomposed into orthogonal tables, independent and interaction tables.

Abstract: Frequently, contingency tables are generated by multinomial sampling. Multinomial probabilities are then organized in a table assigning probabilities to each cell. A probability table can be viewed as an element of the simplex. The Aitchison geometry of the simplex identifies independent probability tables as a linear subspace. An important consequence is that, given a probability table, the nearest independent table is obtained by orthogonal projection onto the independent subspace. The nearest independent table is identified as that obtained by the product of geometric marginals, which do not coincide with the standard marginals except in the independent case. The original probability table is decomposed into orthogonal tables, the independent and the interaction tables. The underlying model is log-linear, and a procedure to test independence of a contingency table, based on a multinomial simulation, is developed. Its performance is studied on an illustrative example.

29 citations
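The projection the abstract describes can be sketched directly: the nearest independent table is the closure (normalization) of the outer product of row and column geometric means. A minimal illustration, assuming a strictly positive table; the example tables are our own:

```python
import math

def closure(t):
    """Normalize a table of positive entries so it sums to one."""
    s = sum(sum(row) for row in t)
    return [[v / s for v in row] for row in t]

def nearest_independent(p):
    """Projection of a positive probability table onto the independent
    subspace (Aitchison geometry): closure of the outer product of the
    row and column *geometric* marginals."""
    rows, cols = len(p), len(p[0])
    gr = [math.prod(p[i]) ** (1.0 / cols) for i in range(rows)]               # row geometric means
    gc = [math.prod(p[i][j] for i in range(rows)) ** (1.0 / rows) for j in range(cols)]
    return closure([[gr[i] * gc[j] for j in range(cols)] for i in range(rows)])

p = [[0.10, 0.30], [0.20, 0.40]]
q = nearest_independent(p)
# q is a rank-1 (independent) table; an already-independent table is left fixed.
```

By construction the result is rank one, so its cross-ratio vanishes, and a table that is already a product of marginals projects to itself.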

••

TL;DR: This paper deals with the estimation of the stress-strength parameter R = P(Y < X) when X and Y are two independent weighted Lindley random variables with a common shape parameter.

Abstract: This article deals with the estimation of the stress-strength parameter R = P(Y < X), when X and Y are two independent weighted Lindley random variables with a common shape parameter. The MLEs can be obtained by maximizing the profile log-likelihood function in one dimension. The asymptotic distribution of the MLEs is also obtained and used to construct an asymptotic confidence interval for R. Bootstrap confidence intervals are also proposed. Monte Carlo simulations are performed to verify the effectiveness of the different estimation methods, and a data analysis is performed for illustrative purposes.
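For intuition, R = P(Y < X) is easy to approximate by simulation. The sketch below deliberately uses exponential stand-ins rather than the weighted Lindley distributions of the paper, because the exponential case has the closed form R = λ_Y/(λ_X + λ_Y) to check against:

```python
import random

def stress_strength_mc(draw_x, draw_y, n=200_000, seed=1):
    """Crude Monte Carlo estimate of R = P(Y < X) for independent X and Y."""
    rng = random.Random(seed)
    return sum(draw_y(rng) < draw_x(rng) for _ in range(n)) / n

# Illustrative stand-in (not the paper's weighted Lindley model):
# X ~ Exp(rate 1), Y ~ Exp(rate 3), for which R = 3 / (1 + 3) = 0.75 exactly.
r_hat = stress_strength_mc(lambda rng: rng.expovariate(1.0),
                           lambda rng: rng.expovariate(3.0))
print(round(r_hat, 2))  # close to 0.75
```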

••

TL;DR: In this article, it is shown that, under some conditions, the largest order statistic Xλn:n is smaller than its counterpart Xθn:n according to likelihood ratio ordering.

Abstract: Let Xλ1, …, Xλn be independent non-negative random variables whose distributions are generated by an absolutely continuous distribution F with parameters λi > 0, i = 1, …, n. It is shown that, under some conditions, the largest order statistic Xλn: n is smaller than Xθn: n according to likelihood ratio ordering. Furthermore, these results are applied when F is a generalized gamma distribution, which includes the Weibull, gamma, and exponential distributions as special cases.
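For reference, the likelihood ratio order invoked here is the standard one: for absolutely continuous X and Y with densities f_X and f_Y,

```latex
X \le_{\mathrm{lr}} Y
\iff
\frac{f_Y(t)}{f_X(t)} \ \text{is nondecreasing in } t,
\qquad\text{and}\qquad
X \le_{\mathrm{lr}} Y \implies X \le_{\mathrm{hr}} Y \implies X \le_{\mathrm{st}} Y,
```

so it is stronger than both the hazard rate order and the usual stochastic order.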

••

TL;DR: This paper introduces CUBE models with covariates, a class of discrete mixture distributions able to take the uncertainty and overdispersion of ordinal data into account, and derives the observed variance-covariance matrix of this model, a necessary step for asymptotic inference about the estimated parameters.

Abstract: We introduce CUBE models with covariates, a class of discrete mixture distributions able to take the uncertainty and overdispersion of ordinal data into account. The main result of the paper is the analytical derivation of the observed variance–covariance matrix of this model, a necessary step for asymptotic inference about the estimated parameters and for model validation. We emphasize some computational aspects of the procedure and discuss the usefulness of the approach on a real case study.

••

TL;DR: This work proposes modeling interval time series with space–time autoregressive models and, based on the process appropriate for the interval bounds, derives the model for the intervals’ center and radius.

Abstract: We consider interval-valued time series, that is, series resulting from collecting real intervals as an ordered sequence through time. Since the lower and upper bounds of the observed intervals at each time point are in fact values of the same variable, they are naturally related. We propose modeling interval time series with space–time autoregressive models and, based on the process appropriate for the interval bounds, we derive the model for the intervals’ center and radius. A simulation study and an application with data of daily wind speed at different meteorological stations in Ireland illustrate that the proposed approach is appropriate and useful.

••

TL;DR: In this article, an exponentiated geometric distribution with two parameters, q (0 < q < 1) and α (α > 0), is proposed as a new generalization of the geometric distribution, employing the techniques of Mudholkar and Srivastava (1993).

Abstract: An exponentiated geometric distribution with two parameters q (0 < q < 1) and α (α > 0) is proposed as a new generalization of the geometric distribution by employing the techniques of Mudholkar and Srivastava (1993). A few realistic settings in which the proposed distribution may arise naturally are discussed, and its distributional and reliability properties are investigated. Parameter estimation is discussed. Application to discrete failure time data modeling is illustrated with real-life data. The suitability of the proposed distribution for empirical modeling of other count data is investigated by conducting comparative data-fitting experiments with over- and under-dispersed data sets.
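One way to make the exponentiation technique concrete is to raise the geometric cdf to a power α. The parameterization below, F(x) = (1 − q^(x+1))^α on x = 0, 1, 2, …, is an assumed form following that technique, not necessarily the paper's exact definition:

```python
def eg_cdf(x, q, alpha):
    """Assumed exponentiated geometric cdf on x = 0, 1, 2, ...:
    F(x) = (1 - q**(x + 1))**alpha,  0 < q < 1, alpha > 0."""
    return (1.0 - q ** (x + 1)) ** alpha

def eg_pmf(x, q, alpha):
    """pmf obtained by differencing the cdf."""
    prev = 0.0 if x == 0 else eg_cdf(x - 1, q, alpha)
    return eg_cdf(x, q, alpha) - prev

# alpha = 1 recovers the ordinary geometric pmf (1 - q) * q**x:
# eg_pmf(3, 0.4, 1.0) = (1 - 0.4) * 0.4**3 = 0.0384.
print(round(eg_pmf(3, 0.4, 1.0), 4))
```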

••

TL;DR: In this paper, the authors deal with inference for the stress-strength reliability R = P(Y < X) when X and Y are two independent two-parameter bathtub-shaped lifetime distributions with different scale parameters but the same shape parameter.

Abstract: Based on progressively Type-II censored samples, this article deals with inference for the stress-strength reliability R = P(Y < X) when X and Y are two independent two-parameter bathtub-shaped lifetime distributions with different scale parameters but the same shape parameter. Different methods for estimating the reliability are applied. The maximum likelihood estimate of R is derived, and its asymptotic distribution is used to construct an asymptotic confidence interval for R. Assuming that the shape parameter is known, the maximum likelihood estimator of R is obtained, and an exact confidence interval is derived from its exact distribution. The uniformly minimum variance unbiased estimator of R is calculated. The Bayes estimate of R and the associated credible interval are also obtained under the assumption of independent gamma priors. Monte Carlo simulations are performed to compare the performances of the proposed estimators, and a data analysis is presented for illustrative purposes.

••

TL;DR: In this article, a multivariate generalized Poisson regression model for count data with any type of dispersion is proposed; the model allows for both positive and negative correlation between any pair of the response variables.

Abstract: A multivariate generalized Poisson regression model based on the multivariate generalized Poisson distribution is defined and studied. The regression model can be used to describe count data with any type of dispersion. The model allows for both positive and negative correlation between any pair of the response variables. The parameters of the regression model are estimated by the maximum likelihood method. Some test statistics are discussed, and two numerical data sets are used to illustrate the applications of the multivariate count data regression model.
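The univariate building block here, Consul's generalized Poisson pmf, can be sketched as follows; the multivariate regression model of the paper builds on it, and the parameter values below are arbitrary illustrative choices:

```python
import math

def gpd_pmf(x, theta, lam):
    """Consul's generalized Poisson pmf:
    P(X = x) = theta * (theta + x*lam)**(x - 1) * exp(-theta - x*lam) / x!,
    with theta > 0 and 0 <= lam < 1 (lam = 0 gives the ordinary Poisson;
    lam > 0 gives overdispersion)."""
    return (theta * (theta + x * lam) ** (x - 1)
            * math.exp(-theta - x * lam) / math.factorial(x))

# Sanity check: the probabilities sum to 1 (x up to 100 covers essentially
# all of the mass for these parameter values).
total = sum(gpd_pmf(x, 2.0, 0.2) for x in range(100))
print(round(total, 6))  # sums to 1
```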

••

TL;DR: The main purpose of this paper is to describe how to do a Tucker3 analysis of compositional data, and to show the relationships between the loading matrices when different preprocessing procedures are used.

Abstract: For the exploratory analysis of three-way data, the Tucker3 model is one of the most widely applied models for studying three-way arrays when the data are quadrilinear. When the data consist of vectors of positive values summing to one, as in the case of compositional data, the model should account for the specific problems that compositional data analysis brings. The main purpose of this paper is to describe how to do a Tucker3 analysis of compositional data, and to show the relationships between the loading matrices when different preprocessing procedures are used.

••

TL;DR: In this article, the authors study the allocation of independent redundancies with a common life distribution to k-out-of-n systems of independent components with non-identical life distributions, and prove that the optimal policy is majorized by all other policies when the system components are stochastically ordered.

Abstract: This paper studies the allocation of independent redundancies with a common life distribution to k-out-of-n systems of independent components with non-identical life distributions. A sufficient condition is found for allocating more active redundancies to the weaker component to gain a larger lifetime for k-out-of-n systems, and assigning more standby redundancies to the weaker (stronger) components is proved to yield a larger lifetime for series (parallel) systems in the sense of the increasing concave (convex) order. Also, the optimal policy is proved to be majorized by all other policies when the system's components are stochastically ordered.

••

TL;DR: In this article, a new discrete distribution related to the generalized gamma distribution (Stacy, 1962) is derived from a statistical mechanical setup; it can be seen as a generalization of the two-parameter discrete gamma distribution and encompasses discrete versions of many important continuous distributions.

Abstract: In this article, a new discrete distribution related to the generalized gamma distribution (Stacy, 1962) is derived from a statistical mechanical setup. This new distribution can be seen as a generalization of the two-parameter discrete gamma distribution (Chakraborty and Chakravarty, 2012) and encompasses discrete versions of many important continuous distributions. Some basic distributional and reliability properties, parameter estimation by different methods, and their comparative performance (assessed by simulation) are investigated. Two real-life data sets are considered for data modeling, and a likelihood ratio test is used to illustrate the advantages of the proposed distribution over the two-parameter discrete gamma distribution.

••

TL;DR: In this paper, an adaptive control chart combining variable sample sizes, variable sampling intervals, and double sampling features, called the CVSSIDS chart, is proposed; it uses three different sample sizes and two different sampling intervals.

Abstract: In this article, an adaptive control chart combining variable sample sizes, variable sampling intervals, and double sampling features, called the CVSSIDS control chart, is proposed; it uses three different sample sizes, two different sampling intervals, two warning limits, and two different control limits. To evaluate the efficiency of the proposed scheme against alternative schemes, the average time to signal (ATS), the average number of samples to signal, and the average number of observations to signal are used. In addition, the minimum ATS values of the proposed chart are computed. Finally, the proposed chart is compared to the standard Shewhart (SS) chart and several adaptive control charts.

••

TL;DR: In this article, a bivariate Gaussian-Weibull distribution and the associated pseudo-truncated Weibull was proposed to model the simultaneous behavior of stiffness and bending strength of wood.

Abstract: Two important wood properties are stiffness (modulus of elasticity, or MOE) and bending strength (modulus of rupture, or MOR). In the past, MOE has often been modeled as Gaussian and MOR as lognormal or as a two- or three-parameter Weibull. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior of MOE and MOR for the purposes of wood system reliability calculations, we introduce a bivariate Gaussian–Weibull distribution and the associated pseudo-truncated Weibull. We use asymptotically efficient likelihood methods to obtain an estimator of the parameter vector of the bivariate Gaussian–Weibull, and then obtain the asymptotic distribution of this estimator.

••

TL;DR: In this paper, the authors derive permutation tests within the sufficiency and conditionality principles of inference; their most important properties are stated without the formal proofs, which can be found in the book by Pesarin and Salmaso (2010).

Abstract: In the literature, permutation tests are mostly derived by means of heuristic arguments (Edgington and Onghena, 2007; Good, 2005). In this paper, we derive them within the sufficiency and conditionality principles of inference. Their most important properties are presented without formal proofs, which can be found in the book by Pesarin and Salmaso (2010).

••

TL;DR: In this paper, a stochastic individual data model is considered, which accommodates occurrence times, reporting and settlement delays, and the severity of every individual claim, and gives rise to a model for the corresponding aggregate data under which classical chain ladder and Bornhuetter-Ferguson algorithms apply.

Abstract: In this paper, a stochastic individual data model is considered. It accommodates occurrence times, reporting and settlement delays, and the severity of every individual claim. This formulation gives rise to a model for the corresponding aggregate data under which the classical chain ladder and Bornhuetter–Ferguson algorithms apply. A claims reserving algorithm is developed under this individual data model, and its performance is compared with the chain ladder and Bornhuetter–Ferguson algorithms to reveal the effects of using individual rather than aggregate data. The findings indicate a notable improvement in the accuracy of loss reserving, especially when the claim amounts are not too heavy-tailed.

••

TL;DR: In this paper, a variance estimator of the M-quantile regression coefficients based on the sandwich approach is proposed, which is applied in the small area estimation context for the estimation of the mean squared error of an estimator for small area means.

Abstract: M-quantile regression is defined as a “quantile-like” generalization of robust regression based on influence functions. This article outlines asymptotic properties for the M-quantile regression coefficients estimators in the case of i.i.d. data with stochastic regressors, paying attention to adjustments due to the first-step scale estimation. A variance estimator of the M-quantile regression coefficients based on the sandwich approach is proposed. Empirical results show that this estimator appears to perform well under different simulated scenarios. The sandwich estimator is applied in the small area estimation context for the estimation of the mean squared error of an estimator for the small area means. The results obtained improve previous findings, especially in the case of heteroskedastic data.

••

TL;DR: In this paper, the authors extend the multiplicative error model to capture the effect of other markets in their structure, in order to study the spillover or the contagion phenomena, under the assumption that the conditional mean of the volatility can be decomposed into the sum of one component representing the proper volatility of the time series analyzed and other components, each representing the volatility transmitted from one other market.

Abstract: Recent statistical models for the analysis of volatility in financial markets incorporate the effect of other markets in their structure, in order to study spillover or contagion phenomena. Extending the Multiplicative Error Model, we are able to capture these characteristics under the assumption that the conditional mean of the volatility can be decomposed into the sum of one component representing the proper volatility of the time series analyzed and other components, each representing the volatility transmitted from one other market. Each component follows its own dynamics, with elements that can be usefully interpreted. This decomposition allows us to establish, at each time, the contribution of each individual market to the global volatility of the market under analysis. We apply this model to four stock indices.

••

TL;DR: In this article, the authors introduce several new folded distributions, derive the statistical properties of each, and apply them to the Norwegian fire claim data.

Abstract: Folded distributions are useful models in statistics. However, not much is known beyond the folded normal distribution introduced in the 1960s. Here, we introduce several new folded distributions. Statistical properties of each distribution are derived. Applications are provided to the Norwegian fire claim data.
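The folding construction itself is simple: the folded version of a distribution is the law of |X|, with density f_{|X|}(x) = f(x) + f(−x) for x ≥ 0. A sketch for the folded normal (the 1960s case the abstract mentions); the μ and σ values are arbitrary illustrative choices:

```python
import math

def folded_pdf(x, pdf):
    """Density of |X| for a random variable X with density pdf:
    f_{|X|}(x) = pdf(x) + pdf(-x) for x >= 0, and 0 otherwise."""
    return pdf(x) + pdf(-x) if x >= 0 else 0.0

def normal_pdf(x, mu=1.0, sigma=2.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Midpoint-rule sanity check on (0, 60): the folded density carries all the mass.
mass = sum(folded_pdf(0.005 + 0.01 * k, normal_pdf) * 0.01 for k in range(6000))
print(round(mass, 3))  # close to 1
```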

••

TL;DR: In this article, a method of bias adjustment which minimizes the asymptotic mean square error is presented for an estimator typically given by maximum likelihood, which can be done without population values.

Abstract: A method of bias adjustment which minimizes the asymptotic mean square error is presented for an estimator typically given by maximum likelihood. Generally, this adjustment includes unknown population values. However, in some examples, the adjustment can be done without population values. In the case of a logit, a reasonable fixed value for the adjustment is found, which gives the asymptotic mean square error smaller than those of the asymptotically unbiased estimator and the maximum likelihood estimator. The weighted-score method, which yields directly the estimator with the minimized asymptotic mean square error, is also given.

••

TL;DR: In this article, a flexible cure rate survival model was developed by Rodrigues et al. (2009a) by assuming the competing cause variable to follow the Conway-Maxwell Poisson distribution.

Abstract: A flexible cure rate survival model was developed by Rodrigues et al. (2009a) by assuming the competing cause variable to follow the Conway-Maxwell Poisson distribution. This model includes as special cases some of the well-known cure rate models. As the data obtained from cancer clinical trials are often right censored, the EM algorithm can be efficiently used to estimate the model parameters based on right censored data. In this paper, we consider the cure rate model of Rodrigues et al. (2009a) and, assuming the time-to-event to follow the gamma distribution, develop exact likelihood inference based on the EM algorithm. An extensive Monte Carlo simulation study examines the method of inference developed. Model discrimination between different cure rate models is carried out by means of the likelihood ratio test and the Akaike and Bayesian information criteria. Finally, the proposed methodology is illustrated with cutaneous melanoma data.

••

TL;DR: In this paper, a Phase I design structure of the X̄-chart, namely the Bayesian X̄-chart, is developed within a Bayesian (posterior distribution) framework, assuming normality of the quality characteristic, in order to incorporate parameter uncertainty.

Abstract: This article develops a Phase I design structure of the X̄-chart, namely the Bayesian X̄-chart, based on a Bayesian (posterior distribution) framework assuming normality of the quality characteristic, in order to incorporate parameter uncertainty. Our approach consists of two stages: (i) construction of the control limits for the X̄-chart based on the posterior distribution of the unknown mean μ and (ii) evaluation of the performance of the proposed design structure. The proposed design structure is compared with the frequentist design structure of the X̄-chart in terms of (i) width of the control region and (ii) power of detecting a shift in the location parameter of the process. The proposed design structure performs better than the usual design structure in detecting shifts in the process parameter when the prior mean is close to the unknown target value.