Showing papers in "Communications in Statistics-theory and Methods in 2013"


Journal ArticleDOI
TL;DR: This article deals with the estimation of the stress-strength parameter R = P(Y < X) when X and Y are independent Lindley random variables with different shape parameters.
Abstract: This article deals with the estimation of the stress-strength parameter R = P(Y < X) when X and Y are independent Lindley random variables with different shape parameters. The uniformly minimum variance unbiased estimator has an explicit expression; however, its exact or asymptotic distribution is very difficult to obtain. The maximum likelihood estimator of the unknown parameter can also be obtained in explicit form. We obtain the asymptotic distribution of the maximum likelihood estimator, which can be used to construct a confidence interval for R. Different parametric bootstrap confidence intervals are also proposed. The Bayes estimator and the associated credible interval, based on independent gamma priors on the unknown parameters, are obtained using Monte Carlo methods. The different methods are compared using simulations, and a data analysis is performed for illustrative purposes.

140 citations
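
A plain Monte Carlo evaluation of R = P(Y < X) is a convenient sanity check for any of the estimators described above, since Lindley variates can be drawn from the standard exponential/gamma mixture representation. The sketch below does only that; the shape parameters and sample size are arbitrary, and none of the paper's UMVUE, MLE, bootstrap, or Bayes procedures are implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def rlindley(theta, size, rng):
    """Draw Lindley(theta) variates via the exponential/gamma mixture representation:
    Exp(theta) with probability theta/(theta+1), Gamma(2, rate=theta) otherwise."""
    u = rng.random(size)
    exp_part = rng.exponential(scale=1.0 / theta, size=size)
    gamma_part = rng.gamma(shape=2.0, scale=1.0 / theta, size=size)
    return np.where(u < theta / (theta + 1.0), exp_part, gamma_part)

# Illustrative shape parameters only; X ~ Lindley(alpha), Y ~ Lindley(beta).
alpha, beta = 1.5, 2.5
n = 200_000
x = rlindley(alpha, n, rng)
y = rlindley(beta, n, rng)
print("Monte Carlo estimate of R = P(Y < X):", np.mean(y < x))
```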


Journal ArticleDOI
TL;DR: In this paper, a new distribution, namely, the Weibull-Pareto distribution, is defined and studied, and various properties of the distribution are obtained, including moments, limiting behavior, and Shannon's entropy.
Abstract: In this article, a new distribution, namely, the Weibull-Pareto distribution, is defined and studied. Various properties of the Weibull-Pareto distribution are obtained. The distribution is found to be unimodal and its shape can be skewed to the right or to the left. Results for moments, limiting behavior, and Shannon's entropy are provided. The method of modified maximum likelihood estimation is proposed for estimating the model parameters. Several real data sets are used to illustrate the applications of the Weibull-Pareto distribution.

117 citations


Journal ArticleDOI
TL;DR: In this article, a new distribution, namely the Marshall-Olkin Frechet (MOF) distribution, was introduced and its unknown parameters were estimated using the maximum likelihood estimation method adopting three different iterative procedures.
Abstract: We introduce a new distribution, namely the Marshall–Olkin Frechet distribution. The probability density and hazard rate functions are derived and their shape properties are considered. Expressions for the nth moments are given. Various results with respect to quantiles, Renyi entropy and order statistics are obtained. The unknown parameters of the new distribution are estimated using the maximum likelihood estimation method adopting three different iterative procedures. The model is applied to a real data set on survival times. [Supplementary materials are available for this article. Go to the publisher's online edition of Communications in Statistics—Theory and Methods for the following free supplemental resource: A file that will allow the random variables from MOF distribution to be generated.]

81 citations
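
The supplementary generator itself is not reproduced here, but assuming the usual Marshall-Olkin survival transformation, G-bar(x) = alpha * F-bar(x) / (1 - (1 - alpha) * F-bar(x)), applied to a Frechet baseline F(x) = exp(-(sigma/x)^lambda), inverse-transform sampling gives a simple way to draw MOF variates. A minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

def rmof(alpha, lam, sigma, size, rng):
    """Marshall-Olkin Frechet variates by inverse transform.
    Baseline Frechet CDF: F(x) = exp(-(sigma/x)**lam).
    Marshall-Olkin CDF:   G(x) = F(x) / (1 - (1 - alpha) * (1 - F(x)))."""
    u = rng.random(size)
    p = alpha * u / (1.0 - (1.0 - alpha) * u)      # solve G(x) = u for the baseline F(x)
    return sigma * (-np.log(p)) ** (-1.0 / lam)    # invert the Frechet CDF

sample = rmof(alpha=2.0, lam=1.5, sigma=1.0, size=10_000, rng=rng)
print(sample.mean(), np.median(sample))
```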


Journal ArticleDOI
TL;DR: In this article, the second-order bias of the maximum likelihood estimators of the Lomax (Pareto II) distribution's parameters is analyzed for finite sample sizes, and an analytic bias correction is derived, which reduces the percentage bias of these estimators by one or two orders of magnitude.
Abstract: The Lomax (Pareto II) distribution has found wide application in a variety of fields. We analyze the second-order bias of the maximum likelihood estimators of its parameters for finite sample sizes, and show that this bias is positive. We derive an analytic bias correction which reduces the percentage bias of these estimators by one or two orders of magnitude, while simultaneously reducing relative mean squared error. Our simulations show that this performance is very similar to that of a parametric bootstrap correction based on a linear bias function. Three examples with actual data illustrate the application of our bias correction.

77 citations
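
The analytic correction derived in the article is not reproduced here. The sketch below shows only the simpler constant-bias parametric bootstrap correction (theta_corrected = 2*theta_hat minus the mean of the bootstrap refits), which is in the same spirit as the bootstrap comparator mentioned in the abstract, using scipy's Lomax routines on simulated data with arbitrary parameter values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated Lomax (Pareto II) data; the true shape/scale are illustrative choices.
x = stats.lomax.rvs(2.0, scale=1.0, size=50, random_state=rng)

# Maximum likelihood fit with the location fixed at 0 (two-parameter Lomax).
shape_hat, _, scale_hat = stats.lomax.fit(x, floc=0)

# Constant-bias parametric bootstrap correction: theta_corr = 2*theta_hat - mean(bootstrap fits).
B = 500
boot = np.empty((B, 2))
for b in range(B):
    xb = stats.lomax.rvs(shape_hat, scale=scale_hat, size=x.size, random_state=rng)
    c_b, _, s_b = stats.lomax.fit(xb, floc=0)
    boot[b] = (c_b, s_b)

corrected = 2 * np.array([shape_hat, scale_hat]) - boot.mean(axis=0)
print("MLE:      ", shape_hat, scale_hat)
print("Corrected:", corrected)
```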


Journal ArticleDOI
TL;DR: This work provides various (equivalent) expressions for the bivariate normal copula, computes its Gini's gamma, and derives improved bounds and approximations on its diagonal.
Abstract: We collect well-known and less-known facts about the bivariate normal distribution and translate them into copula language. In addition, we provide various (equivalent) expressions for the bivariate normal copula, we compute its Gini's gamma, and we derive improved bounds and approximations on its diagonal.

71 citations
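
For concreteness, the bivariate normal copula discussed above is C_rho(u, v) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho), where Phi_2 is the bivariate standard normal CDF with correlation rho. A short evaluation sketch using scipy (the correlation value is arbitrary; Gini's gamma and the paper's bounds are not computed):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_cdf(u, v, rho):
    """Bivariate normal copula C_rho(u, v) = Phi_2(Phi^-1(u), Phi^-1(v); rho)."""
    z = np.column_stack([norm.ppf(u), norm.ppf(v)])
    return multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf(z)

# Values on the diagonal u = v, where the bounds and approximations apply.
u = np.linspace(0.05, 0.95, 5)
print(gaussian_copula_cdf(u, u, rho=0.5))
```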


Journal ArticleDOI
TL;DR: In this paper, the authors consider a class of prior distributions indexed by a parameter quantifying smoothness and show that the corresponding posterior distributions contract around the true parameter at a rate that depends on the smoothness of the true initial condition and the smoothness and scale of the prior.
Abstract: We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a parameter quantifying “smoothness” and show that the corresponding posterior distributions contract around the true parameter at a rate that depends on the smoothness of the true initial condition and the smoothness and scale of the prior. Correct combinations of these characteristics lead to the optimal minimax rate. One type of priors leads to a rate-adaptive Bayesian procedure. The frequentist coverage of credible sets is shown to depend on the combination of the prior and true parameter as well, with smoother priors leading to zero coverage and rougher priors to (extremely) conservative results. In the latter case, credible sets are much larger than frequentist confidence sets, in that the ratio of diameters diverges to infinity. The results are numerically illustrated by a simulated data example.

58 citations


Journal ArticleDOI
TL;DR: This work proposes and applies a new concept of depth measure for multivariate functional data, which allows the application of outlier detection, boxplots construction, and nonparametric tests also in this more general framework.
Abstract: In this article, we address the problem of mining and analyzing multivariate functional data. That is, data where each observation is a set of possibly correlated functions. Complex data of this kind is more and more common in many research fields, particularly in the biomedical context. In this work, we propose and apply a new concept of depth measure for multivariate functional data. With this new depth measure it is possible to generalize robust statistics, such as the median, to the multivariate functional framework, which in turn allows the application of outlier detection, boxplots construction, and nonparametric tests also in this more general framework. We present an application to Electrocardiographic (ECG) signals.

54 citations


Journal ArticleDOI
TL;DR: In this paper, a five-parameter Beta-Dagum distribution is introduced, from which moments, hazard, entropy, and reliability measures are then derived; a simulation study is carried out which shows the good performance of the maximum likelihood estimators for finite samples.
Abstract: This article introduces a five-parameter Beta-Dagum distribution from which moments, hazard and entropy, and reliability measures are then derived. These properties show the high flexibility of the said distribution. The maximum likelihood estimators of the Beta-Dagum parameters are examined and the expected Fisher information matrix provided. Next, a simulation study is carried out which shows the good performance of maximum likelihood estimators for finite samples. Finally, the usefulness of the new distribution is illustrated through real data sets.

49 citations


Journal ArticleDOI
TL;DR: In this paper, the estimation of R = P(Y < X) is considered when X and Y are two independent Weibull random variables with different shape parameters but the same scale parameter.
Abstract: Based on progressively Type II censored samples, we consider the estimation of R = P(Y < X) when X and Y are two independent Weibull random variables with different shape parameters, but having the same scale parameter. The maximum likelihood estimator, approximate maximum likelihood estimator, and Bayes estimator of R are obtained. Based on its asymptotic distribution, a confidence interval for R is obtained. Two bootstrap confidence intervals are also proposed. Analysis of a real data set is given for illustrative purposes. Monte Carlo simulations are also performed to compare the different proposed methods.

47 citations


Journal ArticleDOI
TL;DR: This work considers a bivariate INAR(1) (BINAR(1)) process where cross-correlation is introduced through the use of copulas for the specification of the joint distribution of the innovations, focusing mainly on the parametric case that arises under the assumption of Poisson marginals.
Abstract: Multivariate count time series data occur in many different disciplines. The class of INteger-valued AutoRegressive (INAR) processes has the great advantage of considering explicitly both the discreteness and the autocorrelation characterizing this type of data. Moreover, extensions of the simple INAR(1) model to the multi-dimensional space make it possible to model more than one series simultaneously. However, existing models do not offer great flexibility for dependence modelling, allowing only for positive correlation. In this work, we consider a bivariate INAR(1) (BINAR(1)) process where cross-correlation is introduced through the use of copulas for the specification of the joint distribution of the innovations. We focus mainly on the parametric case that arises under the assumption of Poisson marginals. Other marginal distributions are also considered. A short application to a bivariate financial count series illustrates the model.

42 citations
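
The BINAR(1) construction described above can be made concrete with a small simulation: two INAR(1) series driven by binomial thinning, with the pair of Poisson innovations coupled through a Gaussian copula (one possible copula choice; all parameter values below are illustrative, and estimation is not addressed):

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(3)

def simulate_binar1(T, alphas, lams, rho, rng):
    """Simulate X_{j,t} = alpha_j o X_{j,t-1} + eps_{j,t}, j = 1, 2, where 'o' is
    binomial thinning and the two Poisson innovations are coupled through a
    Gaussian copula with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    u = norm.cdf(rng.multivariate_normal([0.0, 0.0], cov, size=T))   # copula uniforms
    eps = np.column_stack([poisson.ppf(u[:, j], lams[j]) for j in range(2)]).astype(int)
    x = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        thin = [rng.binomial(x[t - 1, j], alphas[j]) for j in range(2)]
        x[t] = np.array(thin) + eps[t]
    return x

series = simulate_binar1(T=500, alphas=(0.4, 0.3), lams=(1.0, 2.0), rho=0.6, rng=rng)
print(np.corrcoef(series.T))
```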


Journal ArticleDOI
TL;DR: In this article, the authors studied the complete convergence for weighted sums of extended negatively dependent random variables and row sums of arrays of rowwise extended negatively dependent random variables, and applied two methods to prove the results.
Abstract: In this article, we study the complete convergence for weighted sums of extended negatively dependent random variables and row sums of arrays of rowwise extended negatively dependent random variables. We apply two methods to prove the results: the first is based on exponential bounds, and the second is based on a generalization of the classical moment inequality for extended negatively dependent random variables.

Journal ArticleDOI
TL;DR: In this article, a new generalization of the skew-normal distribution of Azzalini (1985) is introduced, called the Beta skew-normal (BSN), which is a special case of the Beta generated distribution.
Abstract: We consider a new generalization of the skew-normal distribution introduced by Azzalini (1985). We denote this distribution Beta skew-normal (BSN) since it is a special case of the Beta generated distribution (Jones, 2004). Some properties of the BSN are studied. We pay attention to some generalizations of the skew-normal distribution (Bahrami et al., 2009; Sharafi and Behboodian, 2008; Yadegari et al., 2008) and to their relations with the BSN.
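
Under the Beta-generated construction of Jones (2004) referred to above, the BSN density is g(x) = f(x) F(x)^{a-1} (1 - F(x))^{b-1} / B(a, b), with f and F the skew-normal pdf and cdf. A minimal sketch of that density, with arbitrary parameter values:

```python
import numpy as np
from scipy.stats import skewnorm
from scipy.special import beta as beta_fn

def bsn_pdf(x, a, b, lam):
    """Beta skew-normal density via the Beta-generated construction:
    g(x) = f(x) * F(x)**(a-1) * (1 - F(x))**(b-1) / B(a, b),
    where f and F are the skew-normal(lam) pdf and cdf."""
    f, F = skewnorm.pdf(x, lam), skewnorm.cdf(x, lam)
    return f * F ** (a - 1) * (1 - F) ** (b - 1) / beta_fn(a, b)

x = np.linspace(-3, 3, 7)
print(bsn_pdf(x, a=2.0, b=3.0, lam=1.5))
```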

Journal ArticleDOI
TL;DR: In this article, the authors give chi-squared goodness-of-fit tests for parametric regression models such as accelerated failure time, proportional hazards, generalized proportional hazard, frailty models, transformation models, and models with cross-effects of survival functions.
Abstract: We give chi-squared goodness-of-fit tests for parametric regression models such as accelerated failure time, proportional hazards, generalized proportional hazards, frailty models, transformation models, and models with cross-effects of survival functions. Random right censored data are used. Choice of random grouping intervals as data functions is considered.

Journal ArticleDOI
TL;DR: The authors introduced a generalization of the Conway-Maxwell-Poisson regression model that allows for group-level dispersion in order to detect and model a mixture of populations with different dispersion levels.
Abstract: Poisson regression is the most well-known method for modeling count data. When data display over-dispersion, thereby violating the underlying equi-dispersion assumption of Poisson regression, the common solution is to use negative-binomial regression. We show, however, that count data that appear to be equi- or over-dispersed may actually stem from a mixture of populations with different dispersion levels. To detect and model such a mixture, we introduce a generalization of the Conway-Maxwell-Poisson (COM-Poisson) regression model that allows for group-level dispersion. We illustrate mixed dispersion effects and the proposed methodology via semi-authentic data.
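
For reference, the COM-Poisson pmf underlying the regression model is P(Y = y) = lambda^y / ((y!)^nu Z(lambda, nu)), where nu is the dispersion parameter and nu = 1 recovers the Poisson case. A small sketch that evaluates the pmf with a truncated normalizing constant (truncation point and parameter values are arbitrary; the group-level dispersion regression itself is not implemented):

```python
import numpy as np
from scipy.special import gammaln

def com_poisson_pmf(y, lam, nu, jmax=200):
    """COM-Poisson pmf P(Y = y) = lam**y / ((y!)**nu * Z(lam, nu)), with the
    normalizing constant Z truncated at jmax terms (computed on the log scale)."""
    j = np.arange(jmax)
    log_z = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    y = np.asarray(y)
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_z)

# nu < 1: over-dispersion, nu = 1: Poisson, nu > 1: under-dispersion.
print(com_poisson_pmf(np.arange(6), lam=2.0, nu=0.7))
```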

Journal ArticleDOI
TL;DR: In this paper, different methods of specifying the ridge parameter k were proposed and evaluated in terms of Mean Square Error (MSE) by simulation techniques and compared with other ridge-type estimators evaluated elsewhere.
Abstract: Ridge regression is a variant of ordinary multiple linear regression whose goal is to circumvent the problem of predictor collinearity. It gives up the Ordinary Least Squares (OLS) estimator as a method for estimating the parameters β of the multiple linear regression model y = Xβ + ε. Different methods of specifying the ridge parameter k are proposed and evaluated in terms of Mean Square Error (MSE) by simulation techniques. Comparison is made with other ridge-type estimators evaluated elsewhere. The new estimators of the ridge parameter are shown to have very good MSE properties compared with the other estimators of the ridge parameter and the OLS estimator. Based on our results from the simulation study, we may recommend the new ridge parameters to practitioners.
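
As context, the ridge estimator replaces the OLS solution with beta_hat(k) = (X'X + kI)^{-1} X'y; how k should be chosen is precisely what the proposed rules address, and those rules are not implemented here. A minimal sketch on an artificially collinear design:

```python
import numpy as np

rng = np.random.default_rng(4)

def ridge_estimator(X, y, k):
    """Ridge estimate beta_hat(k) = (X'X + k*I)^{-1} X'y; k = 0 gives OLS."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

# Artificially collinear design (not the article's simulation design).
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=n)

for k in (0.0, 0.5, 2.0):
    print(k, ridge_estimator(X, y, k))
```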

Journal ArticleDOI
TL;DR: In this paper, a generalization of the inverse Weibull distribution, referred to as the Beta Inverse-Weibull (BIW) distribution and generated from the logit of a beta random variable, was proposed; the distribution is found to be unimodal.
Abstract: The inverse Weibull distribution is one of the widely applied distributions for problems in reliability theory. In this article, we introduce a generalization—referred to as the Beta Inverse-Weibull distribution—generated from the logit of a beta random variable. We provide a comprehensive treatment of the mathematical properties of the Beta Inverse-Weibull distribution. The shapes of the corresponding probability density function and the hazard rate function have been obtained and graphical illustrations have been given. The distribution is found to be unimodal. Results for the non-central moments are obtained. The relationships between the parameters and the mean, variance, skewness, and kurtosis are provided. The method of maximum likelihood is proposed for estimating the model parameters. We hope that this generalization will attract wider applicability to problems in reliability theory and mechanical engineering.

Journal ArticleDOI
TL;DR: In this article, the maximum likelihood estimator (MLE) as well as the Bayes estimator of the traffic intensity in an M/M/1/∞ queueing model in equilibrium, based on the number of customers present in the queue at successive departure epochs, have been worked out.
Abstract: In this article, the maximum likelihood estimator (MLE) as well as the Bayes estimator of the traffic intensity (ρ) in an M/M/1/∞ queueing model in equilibrium, based on the number of customers present in the queue at successive departure epochs, have been worked out. Estimates of some functions of ρ which provide measures of effectiveness of the queue have also been derived. A comprehensive simulation study starting with the transition probability matrix has been carried out in the last section.
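
In equilibrium, the number of customers left behind at a departure epoch follows the geometric law P(N = n) = (1 - ρ) ρ^n, so a simple plug-in estimate of ρ is xbar / (1 + xbar). The sketch below treats the observed counts as an i.i.d. geometric sample with hypothetical data; the article's MLE and Bayes estimators work with the embedded Markov chain, so this is only an illustration:

```python
import numpy as np

def traffic_intensity_estimate(counts):
    """Rough estimate of rho obtained by treating the departure-epoch queue lengths
    as an i.i.d. sample from the stationary geometric law P(N = n) = (1 - rho) * rho**n,
    which gives rho_hat = mean / (1 + mean)."""
    m = np.mean(counts)
    return m / (1.0 + m)

# Hypothetical queue lengths observed at successive departure epochs.
counts = np.array([0, 1, 1, 2, 0, 3, 1, 0, 2, 1])
print(traffic_intensity_estimate(counts))
```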

Journal ArticleDOI
TL;DR: In this article, a class of estimators for population mean is defined and asymptotic expressions of the bias and mean squared error of the proposed estimators were obtained, along with its mean-squared error formula.
Abstract: This article addresses the problem of estimating the population mean in the presence of auxiliary information. A class of estimators for population mean is defined. Asymptotic expressions of the bias and mean squared error of the proposed class of estimators were obtained. Asymptotic optimum estimator (AOE) in the proposed class of estimators was identified along with its mean squared error formula. It has been shown that the proposed class of estimators is more efficient than the usual regression estimator and Khoshnevisan et al. (2007), Singh et al. (2007), and Koyuncu and Kadilar (2009a) classes of estimators.
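
The benchmark against which the proposed class is compared is the usual regression estimator ybar_lr = ybar + b(Xbar - xbar), where b is the sample regression slope and Xbar is the known population mean of the auxiliary variable. A minimal sketch with hypothetical data (the article's class of estimators is not implemented):

```python
import numpy as np

rng = np.random.default_rng(5)

def regression_estimator(y, x, X_bar):
    """Usual linear regression estimator of the population mean:
    ybar_lr = ybar + b * (X_bar - xbar), with b the sample regression slope
    and X_bar the known population mean of the auxiliary variable."""
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return np.mean(y) + b * (X_bar - np.mean(x))

# Hypothetical sample with a known auxiliary population mean.
x = rng.normal(50.0, 10.0, size=30)
y = 2.0 * x + rng.normal(0.0, 5.0, size=30)
print(regression_estimator(y, x, X_bar=52.0))
```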

Journal ArticleDOI
TL;DR: In this paper, a new Liu-type estimator was introduced which includes the ordinary least squares (OLS) estimator, the ordinary ridge regression (ORR) estimator, the Liu estimator (LE), the (k − d) class estimator, the principal components regression (PCR) estimator, the (r − d) class estimator, and the (r − k) class estimator as special cases.
Abstract: In this article, we introduce a new Liu-type estimator which includes the ordinary least squares estimator (OLS), ordinary ridge regression estimator (ORR), Liu estimator (LE), (k − d) class estimator, principal components regression (PCR) estimator, (r − d) class estimator, and (r − k) class estimator. Under some conditions, the performance of the proposed estimator is superior to the other estimators in terms of the scalar mean squared error criterion. A simulation study has been conducted to compare the performance of the estimators. Finally, a numerical example has been analyzed to illustrate the theoretical results of the article.
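
For orientation, the classical Liu estimator, one of the special cases listed above, is beta_hat_d = (X'X + I)^{-1}(X'y + d * beta_hat_OLS), which reduces to OLS at d = 1; the article's unified Liu-type estimator is not reproduced here. A short sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(6)

def liu_estimator(X, y, d):
    """Classical Liu estimator beta_hat_d = (X'X + I)^{-1} (X'y + d * beta_hat_OLS);
    d = 1 recovers the OLS estimator."""
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    return np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y + d * beta_ols)

X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=60)
for d in (0.2, 0.6, 1.0):
    print(d, liu_estimator(X, y, d))
```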

Journal ArticleDOI
TL;DR: In this article, a ratio-cum-difference type class of estimators for population variance has been suggested with its properties under large sample approximation under a simple random sampling approach.
Abstract: This article addresses the problem of estimating of finite population variance using auxiliary information in simple random sampling. A ratio-cum-difference type class of estimators for population variance has been suggested with its properties under large sample approximation. It has been shown that the suggested class of estimators is more efficient than usual unbiased, difference, Das and Tripathi (1978), Isaki (1983), Singh et al. (1988), Kadilar and Cingi (2006), and other estimators/classes of estimators. In addition, we support this theoretical result with the aid of a empirical study.

Journal ArticleDOI
TL;DR: In this paper, the authors developed an optimization model to determine the optimum values of the thresholds such that constraints on the probability of Type I and Type II errors are satisfied, where the expected total cost of the inspection problem containing the costs of acceptance, rejection, and inspection was derived.
Abstract: In an acceptance-sampling plan, where items of an incoming batch of products are inspected one by one, if the number of conforming items between successive non conforming items falls below a lower control threshold, the batch is rejected. If it falls above an upper control threshold, the batch is accepted, and if it lies within the thresholds then the process of inspecting the items continues. The purpose of this article is to develop an optimization model to determine the optimum values of the thresholds such that constraints on the probability of Type I and Type II errors are satisfied. This article starts by developing a Markovian model to derive the expected total cost of the inspection problem containing the costs of acceptance, rejection, and inspection. Then, the optimum values of the thresholds are selected in order to minimize the expected cost. To demonstrate the application of the proposed methodology, perform sensitivity analysis, and compare the performance of the proposed procedure to the on...

Journal ArticleDOI
TL;DR: Various dissimilarity measures for histogram data are introduced, including extensions to the Gowda-Diday and Ichino-Yaguchi measures for interval data, along with extensions of some DeCarvalho measures for cluster analysis.
Abstract: Contemporary datasets can be immense and complex in nature. Thus, summarizing and extracting information frequently precedes any analysis. The summarizing techniques are many and varied and driven by underlying scientific questions of interest. One type of resulting datasets contains so-called histogram-valued observations. While such datasets are becoming more and more pervasive, methodologies to analyse them are still very inadequate. One area of interest falls under the rubric of cluster analysis. Unfortunately, to date, no dis/similarity or distance measures that are readily computable exist for multivariate histogram-valued data. To redress that problem, the present article introduces various dissimilarity measures for histogram data. In particular, extensions to the Gowda-Diday and Ichino-Yaguchi measures for interval data are introduced, along with extensions of some DeCarvalho measures. In addition, a cumulative distribution measure is developed for histograms. These new measures are illustrated f...
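
To give a flavour of a cumulative-distribution-type measure for histogram-valued data, the sketch below computes an L1 distance between the cumulative relative frequencies of two histograms sharing the same bins. This is an illustrative measure only, not necessarily any of the exact measures developed in the article:

```python
import numpy as np

def cdf_distance(p, q, bin_widths):
    """L1 distance between the cumulative relative frequencies of two histograms
    defined on the same bins, weighted by bin width."""
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q)) * bin_widths)

p = np.array([0.10, 0.40, 0.30, 0.20])   # relative frequencies, histogram 1
q = np.array([0.25, 0.25, 0.25, 0.25])   # relative frequencies, histogram 2
print(cdf_distance(p, q, bin_widths=np.ones(4)))
```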

Journal ArticleDOI
TL;DR: In this paper, it was shown that the confidence intervals based on the FMA estimator suggested by Hjort and Claeskens (2003) are asymptotically equivalent to those obtained from the full model under both parametric and varying-coefficient partially linear models.
Abstract: An important contribution to the literature on frequentist model averaging (FMA) is the work of Hjort and Claeskens (2003), who developed an asymptotic theory for frequentist model averaging in parametric models based on a local mis-specification framework. They also proposed a simple method for constructing confidence intervals of the unknown parameters. This article shows that the confidence intervals based on the FMA estimator suggested by Hjort and Claeskens (2003) are asymptotically equivalent to those obtained from the full model under both parametric and varying-coefficient partially linear models. Thus, as far as interval estimation rather than point estimation is concerned, the confidence interval based on the full model already fulfills the objective, and model averaging provides no additional useful information.

Journal ArticleDOI
TL;DR: In this article, the authors proposed two improved families of estimators for estimating the finite population mean in simple random sampling (SRS) and stratified random sampling (StRS), which always perform better than the family of ratio estimators suggested by Khoshnevisan et al.
Abstract: This article proposes two improved families of estimators for estimating the finite population mean in simple random sampling (SRS) and stratified random sampling (StRS). The proposed estimators always perform better than the family of ratio estimators suggested by Khoshnevisan et al. (2007) in SRS and Koyuncu and Kadilar (2009a) in StRS. They also perform better than the ratio estimator given by Gupta and Shabbir (2008) in SRS and by Koyuncu and Kadilar (2010) and Shabbir and Gupta (2011) in StRS. The expressions for the bias and mean squared error (MSE) of the considered estimators are obtained. The results are illustrated by real data sets.

Journal ArticleDOI
Tomonari Sei
TL;DR: In this article, the Jacobian determinant of a gradient map is shown to be log-concave with respect to a convex combination of the potential functions when the underlying manifold is the sphere and the cost function is the distance squared.
Abstract: In the field of optimal transport theory, an optimal map is known to be a gradient map of a potential function satisfying cost-convexity. In this article, the Jacobian determinant of a gradient map is shown to be log-concave with respect to a convex combination of the potential functions when the underlying manifold is the sphere and the cost function is the distance squared. As an application to statistics, a new family of probability densities on the sphere is defined in terms of cost-convex functions. The log-concave property of the likelihood function follows from the inequality.

Journal ArticleDOI
TL;DR: In this article, the authors considered a continuous-time branching random walk on Z^d, where the particles are born and die at a single lattice point (the source of branching).
Abstract: We consider a continuous-time branching random walk on Z^d, where the particles are born and die at a single lattice point (the source of branching). The underlying random walk is assumed to be symmetric. Moreover, corresponding transition rates of the random walk have heavy tails. As a result, the variance of the jumps is infinite, and a random walk may be transient even on low-dimensional lattices (d = 1, 2). Conditions of transience for a random walk on Z^d and limit theorems for the numbers of particles both at an arbitrary point of the lattice and on the entire lattice are obtained.

Journal ArticleDOI
TL;DR: The Kumaraswamy distribution as discussed by the authors is very similar to the Beta distribution but has the key advantage of a closed-form cumulative distribution function, which makes it much better suited than the beta distribution for computation-intensive activities like simulation modeling and the estimation of models by simulation-based methods.
Abstract: The Kumaraswamy distribution is very similar to the Beta distribution but has the key advantage of a closed-form cumulative distribution function. This makes it much better suited than the Beta distribution for computation-intensive activities like simulation modeling and the estimation of models by simulation-based methods. However, in spite of the fact that the Kumaraswamy distribution was introduced in 1980, further theoretical research on the distribution was not developed until very recently (Garg, 2008; Jones, 2009; Mitnik, 2009; Nadarajah, 2008). This article contributes to this recent research and: (a) shows that Kumaraswamy variables are closed under exponentiation and under linear transformation; (b) derives an expression for the moments of the general form of the distribution; (c) specifies some of the distribution's limiting distributions; and (d) introduces an analytical expression for the mean absolute deviation around the median as a function of the parameters of the distribution, an...
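
The closed-form CDF, F(x) = 1 - (1 - x^a)^b on (0, 1), is what makes simulation trivial: inverting it gives x = (1 - (1 - u)^{1/b})^{1/a}. A minimal sketch with arbitrary parameter values; the sample mean can be compared with the analytic mean b*B(1 + 1/a, b):

```python
import numpy as np
from scipy.special import beta as beta_fn

rng = np.random.default_rng(7)

def rkumaraswamy(a, b, size, rng):
    """Kumaraswamy(a, b) variates by inverting the closed-form CDF
    F(x) = 1 - (1 - x**a)**b on (0, 1)."""
    u = rng.random(size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

sample = rkumaraswamy(a=2.0, b=5.0, size=100_000, rng=rng)
print(sample.mean(), 5.0 * beta_fn(1.0 + 1.0 / 2.0, 5.0))   # simulated vs. analytic mean
```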

Journal ArticleDOI
TL;DR: Huang (2010), as mentioned in this paper, proposed an optional randomized response model using a linear combination scrambling which is a generalization of the multiplicative scrambling of Eichhorn and Hayre (1983) and the additive scrambling of Gupta et al. (2006, 2010).
Abstract: Huang (2010) proposed an optional randomized response model using a linear combination scrambling which is a generalization of the multiplicative scrambling of Eichhorn and Hayre (1983) and the additive scrambling of Gupta et al. (2006, 2010). In this article, we discuss two main issues. (1) Can the Huang (2010) model be improved further by using a two-stage approach? (2) Does the linear combination scrambling provide any benefit over the additive scrambling of Gupta et al. (2010)? We will note that the answer to the first question is “yes” but the answer to the second question is “no.”

Journal ArticleDOI
TL;DR: In this article, a kernel density-based regression estimate (KDRE) is proposed to estimate the error density of linear regression models with unknown error distribution, which is shown to be asymptotically as efficient as the oracle maximum likelihood estimate (MLE).
Abstract: For linear regression models with non-normally distributed errors, the least squares estimate (LSE) will lose some efficiency compared to the maximum likelihood estimate (MLE). In this article, we propose a kernel density-based regression estimate (KDRE) that is adaptive to the unknown error distribution. The key idea is to approximate the likelihood function by using a nonparametric kernel density estimate of the error density based on some initial parameter estimate. The proposed estimate is shown to be asymptotically as efficient as the oracle MLE, which assumes the error density is known. In addition, we propose an EM type algorithm to maximize the estimated likelihood function and show that the KDRE can be considered as an iterated weighted least squares estimate, which provides us some insights on the adaptiveness of KDRE to the unknown error distribution. Our Monte Carlo simulation studies show that, while comparable to the traditional LSE for normal errors, the proposed estimation procedure can h...
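
The idea can be illustrated with a stripped-down, one-pass version: fit least squares, estimate the residual density with a kernel density estimator, then maximize the resulting estimated log-likelihood over the regression coefficients. The article's EM-type algorithm and asymptotic theory are not reproduced, and the data and error distribution below are hypothetical:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Hypothetical data with non-normal (Laplace) errors.
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.laplace(scale=1.0, size=n)

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]     # step 1: initial least squares fit
kde = gaussian_kde(y - X @ beta_ls)                # step 2: kernel estimate of the error density

def neg_loglik(beta):                              # step 3: maximize the estimated likelihood
    return -np.sum(np.log(kde(y - X @ beta) + 1e-300))

beta_kdre = minimize(neg_loglik, beta_ls, method="Nelder-Mead").x
print("LSE:        ", beta_ls)
print("KDRE-style: ", beta_kdre)
```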

Journal ArticleDOI
TL;DR: A repairable system with age-dependent failure type and minimal repair based on a cumulative repair-cost limit policy is studied, where the information of the entire repair-cost history is adopted to decide whether the system is repaired or replaced.
Abstract: In this article, a repairable system with age-dependent failure type and minimal repair based on a cumulative repair-cost limit policy is studied, where the information of the entire repair-cost history is adopted to decide whether the system is repaired or replaced. As the failures occur, the system has two failure types: (i) a Type-I failure (minor) type that is rectified by a minimal repair, and (ii) a Type-II failure (catastrophic) type that calls for a replacement. We consider a bivariate replacement policy, denoted by (n,T), in which the system is replaced at life age T, or at the n-th Type-I failure, or at the k-th Type-I failure (k < n and due to a minor failure at which the accumulated repair cost exceeds the pre-determined limit), or at the first Type-II failure, whichever occurs first. The optimal minimum-cost replacement policy (n,T)* is derived analytically in terms of its existence and uniqueness. Several classical models in the maintenance literature could be regarded as special cases of the presented...