
Showing papers in "Journal of Statistical Computation and Simulation in 2015"


Journal ArticleDOI
TL;DR: In this article, a new class of sensitivity indices based on dependence measures is introduced, overcoming the theoretical and practical limitations of variance-based global sensitivity analysis, which focuses only on the variance of the output and handles multivariate variables in a limited way.
Abstract: Global sensitivity analysis with variance-based measures suffers from several theoretical and practical limitations, since they focus only on the variance of the output and handle multivariate variables in a limited way. In this paper, we introduce a new class of sensitivity indices based on dependence measures which overcomes these insufficiencies. Our approach originates from the idea to compare the output distribution with its conditional counterpart when one of the input variables is fixed. We establish that this comparison yields previously proposed indices when it is performed with Csiszar f-divergences, as well as sensitivity indices which are well-known dependence measures between random variables. This leads us to investigate completely new sensitivity indices based on recent state-of-the-art dependence measures, such as distance correlation and the Hilbert–Schmidt independence criterion. We also emphasize the potential of feature selection techniques relying on such dependence measures as altern...
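
As a rough illustration of how such dependence-measure indices can be estimated in practice, the sketch below computes an empirical distance correlation between each input and the output of a toy model; the model, inputs and sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two 1-D samples (Szekely et al.)."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[:, None]
    a = np.abs(x - x.T)                                   # pairwise distances
    b = np.abs(y - y.T)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()     # double centring
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

# Toy model (an assumption for illustration, not from the paper)
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
Y = X[:, 0] + 0.1 * X[:, 1] ** 3

for i in range(2):
    print(f"dCor(X{i + 1}, Y) = {distance_correlation(X[:, i], Y):.3f}")
```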

130 citations


Journal ArticleDOI
TL;DR: In this paper, restricted cubic splines are used to approximate complex hazard functions in the context of time-to-event data, where the degree of complexity for the spline functions is dictated by the number of knots that are defined.
Abstract: If interest lies in reporting absolute measures of risk from time-to-event data then obtaining an appropriate approximation to the shape of the underlying hazard function is vital. It has previously been shown that restricted cubic splines can be used to approximate complex hazard functions in the context of time-to-event data. The degree of complexity for the spline functions is dictated by the number of knots that are defined. We highlight through the use of a motivating example that complex hazard function shapes are often required when analysing time-to-event data. Through the use of simulation, we show that provided a sufficient number of knots are used, the approximated hazard functions given by restricted cubic splines fit closely to the true function for a range of complex hazard shapes. The simulation results also highlight the insensitivity of the estimated relative effects (hazard ratios) to the correct specification of the baseline hazard.
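
For readers unfamiliar with the spline construction, the sketch below builds one common restricted cubic spline basis (a truncated-power parametrization that is linear beyond the boundary knots); the knot values and the use of log-time are illustrative assumptions, and the paper's actual model-fitting step is not reproduced.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: a linear term plus K-2 nonlinear terms,
    cubic between knots and linear beyond the boundary knots
    (the common Durrleman-Simon/Harrell parametrization)."""
    x = np.asarray(x, dtype=float)
    k = np.sort(np.asarray(knots, dtype=float))
    K = len(k)
    plus = lambda u: np.clip(u, 0.0, None) ** 3          # truncated cubic (u)_+^3
    cols = [x]                                           # linear term
    for j in range(K - 2):
        lam = (k[-1] - k[j]) / (k[-1] - k[-2])
        mu = (k[-2] - k[j]) / (k[-1] - k[-2])
        cols.append(plus(x - k[j]) - lam * plus(x - k[-2]) + mu * plus(x - k[-1]))
    return np.column_stack(cols)

# Example: a 5-knot basis evaluated on log-time, as might enter a spline model
# for the log cumulative hazard (illustrative knots, not from the paper).
t = np.linspace(0.1, 10, 200)
B = rcs_basis(np.log(t), knots=np.log([0.2, 1, 3, 6, 9]))
print(B.shape)   # (200, 4): linear term + 3 restricted cubic terms
```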

92 citations


Journal ArticleDOI
TL;DR: In this paper, the gamma-Lomax distribution with an extra positive parameter is proposed and studied, and the structural properties of the new distribution are derived including explicit expressions for the moments, generating and quantile functions, mean deviations and Renyi entropy.
Abstract: For any continuous baseline G distribution, Zografos and Balakrishnan [On families of beta- and generalized gamma-generated distributions and associated inference. Statist Methodol. 2009;6:344–362] proposed a generalized gamma-generated distribution with an extra positive parameter. A new three-parameter continuous distribution called the gamma-Lomax distribution, which extends the Lomax distribution, is proposed and studied. Various structural properties of the new distribution are derived including explicit expressions for the moments, generating and quantile functions, mean deviations and Renyi entropy. The estimation of the model parameters is performed by maximum likelihood. We also determine the observed information matrix. An application illustrates the usefulness of the proposed model.
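
The Zografos–Balakrishnan construction cited above yields the density f(x) = g(x)[-log(1 - G(x))]^(a-1)/Γ(a) for baseline cdf G and pdf g. The short sketch below evaluates this density with a Lomax baseline as a sanity check; the parameter values are illustrative.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def gamma_lomax_pdf(x, a, alpha, beta):
    """Gamma-G density (Zografos-Balakrishnan construction) with a Lomax baseline:
    f(x) = g(x) * [-log(1 - G(x))]**(a - 1) / Gamma(a)."""
    x = np.asarray(x, dtype=float)
    G = 1.0 - (1.0 + x / beta) ** (-alpha)                       # Lomax cdf
    g = (alpha / beta) * (1.0 + x / beta) ** (-(alpha + 1.0))    # Lomax pdf
    return g * (-np.log1p(-G)) ** (a - 1.0) / gamma_fn(a)

# Sanity check with illustrative parameters: the density should integrate to ~1
x = np.linspace(1e-6, 200, 200_000)
print(np.trapz(gamma_lomax_pdf(x, a=2.0, alpha=3.0, beta=2.0), x))
```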

91 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate various properties and methods of estimation of the Weighted Exponential distribution from both frequentist and Bayesian points of view, and derive the stochastic ordering, the Bonferroni and Lorenz curves, various entropies and order statistics for this distribution.
Abstract: In this article, we investigate various properties and methods of estimation of the Weighted Exponential distribution. Although our main focus is on estimation (from both frequentist and Bayesian points of view), the stochastic ordering, the Bonferroni and Lorenz curves, various entropies and order statistics are derived for the first time for this distribution. Different types of loss functions are considered for Bayesian estimation. Furthermore, the Bayes estimators and their respective posterior risks are computed and compared using Gibbs sampling. The different reliability characteristics, including the hazard function, stress-strength analysis and the mean residual life function, are also derived. Monte Carlo simulations are performed to compare the performances of the proposed methods of estimation, and two real data sets are analysed for illustrative purposes.

70 citations


Journal ArticleDOI
TL;DR: In this article, a new four-parameter class of generalized Lindley (GL) distribution called the beta-generalized Lindley distribution (BGL) was proposed, which contains the GL and Lindley distributions as special cases, and the properties of these distributions, including hazard functions, reverse hazard function, monotonicity property, shapes, moments, reliability, mean deviations, Bonferroni and Lorenz curves are derived.
Abstract: A new four-parameter class of generalized Lindley (GL) distribution called the beta-generalized Lindley (BGL) distribution is proposed. This class of distributions contains the beta-Lindley, GL and Lindley distributions as special cases. Expansion of the density of the BGL distribution is obtained. The properties of these distributions, including hazard function, reverse hazard function, monotonicity property, shapes, moments, reliability, mean deviations, Bonferroni and Lorenz curves are derived. Measures of uncertainty such as Renyi entropy and s-entropy as well as Fisher information are presented. Method of maximum likelihood is used to estimate the parameters of the BGL and related distributions. Finally, real data examples are discussed to illustrate the applicability of this class of models.

67 citations


Journal ArticleDOI
TL;DR: In this paper, a numerical method is developed to identify the component functions of the hierarchically orthogonal decomposition of Chastaing et al., using the hierarchical orthogonality property; the decomposition leads to generalized sensitivity indices able to quantify the uncertainty of Y due to each dependent input in X.
Abstract: The hierarchically orthogonal functional decomposition of any measurable function η of a random vector X=(X1, … , Xp) consists in decomposing η(X) into a sum of increasing dimension functions depending only on a subvector of X. Even when X1, … , Xp are assumed to be dependent, this decomposition is unique if the components are hierarchically orthogonal. That is, two of the components are orthogonal whenever all the variables involved in one of the summands are a subset of the variables involved in the other. Setting Y=η(X), this decomposition leads to the definition of generalized sensitivity indices able to quantify the uncertainty of Y due to each dependent input in X [Chastaing G, Gamboa F, Prieur C. Generalized Hoeffding–Sobol decomposition for dependent variables – application to sensitivity analysis. Electron J Statist. 2012;6:2420–2448]. In this paper, a numerical method is developed to identify the component functions of the decomposition using the hierarchical orthogonality property. Furthermore,...

64 citations


Journal ArticleDOI
TL;DR: In this paper, a new sensitivity index is proposed based on the modification of the probability density function (pdf) of the random inputs, when the quantity of interest is a failure probability (probability that a model output exceeds a given threshold).
Abstract: Sensitivity analysis of a numerical model, for instance one simulating physical phenomena, is useful to quantify the influence of the inputs on the model responses. This paper proposes a new sensitivity index, based upon the modification of the probability density function (pdf) of the random inputs, when the quantity of interest is a failure probability (the probability that a model output exceeds a given threshold). An input is considered influential if the input pdf modification leads to a broad change in the failure probability. These sensitivity indices can be computed using the sole set of simulations that has already been used to estimate the failure probability, thus limiting the number of calls to the numerical model. In the case of a Monte Carlo sample, asymptotic properties of the indices are derived. Based on the Kullback–Leibler divergence, several types of input perturbations are introduced. The relevance of this new sensitivity analysis method is analysed through three case studies.
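
The reuse-the-same-sample idea can be illustrated by likelihood-ratio reweighting of an existing Monte Carlo sample, as sketched below; the toy model, threshold and mean-shift perturbation are assumptions for illustration only and do not reproduce the paper's indices.

```python
import numpy as np
from scipy import stats

# Reuse one Monte Carlo sample to estimate the failure probability under a
# perturbed input density via the likelihood ratio f_perturbed / f_original.
rng = np.random.default_rng(1)
x = rng.normal(size=(100_000, 2))               # independent standard normal inputs
g = x[:, 0] + 2.0 * x[:, 1]                     # toy model output
fail = g > 4.0                                  # failure event {output > threshold}
p_ref = fail.mean()

def perturbed_failure_prob(i, delta):
    """Failure probability when input i's density is mean-shifted by delta."""
    w = stats.norm.pdf(x[:, i], loc=delta) / stats.norm.pdf(x[:, i])
    return np.mean(fail * w)

for i in range(2):
    print(f"input {i + 1}: p_ref = {p_ref:.4f}, "
          f"perturbed p = {perturbed_failure_prob(i, 0.5):.4f}")
```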

58 citations


Journal ArticleDOI
TL;DR: To improve the performance of DC-SIS, an effective iterative procedure based on distance correlation is introduced to detect all truly important predictors and potentially interactions in both linear and nonlinear models.
Abstract: Feature screening and variable selection are fundamental in analysis of ultrahigh-dimensional data, which are being collected in diverse scientific fields at relatively low cost. Distance correlation-based sure independence screening (DC-SIS) has been proposed to perform feature screening for ultrahigh-dimensional data. The DC-SIS possesses sure screening property and filters out unimportant predictors in a model-free manner. Like all independence screening methods, however, it fails to detect the truly important predictors which are marginally independent of the response variable due to correlations among predictors. When there are many irrelevant predictors which are highly correlated with some strongly active predictors, the independence screening may miss other active predictors with relatively weak marginal signals. To improve the performance of DC-SIS, we introduce an effective iterative procedure based on distance correlation to detect all truly important predictors and potentially interactions in ...

57 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of measurement error on the detection abilities of the exponentially weighted moving average (EWMA) control charts for monitoring process mean based on ranked set sampling (RSS), median RSS (MRSS), imperfect RSS (IRSS), and imperfect MRSS (IMRSS) schemes was studied.
Abstract: Control charts are a powerful statistical process monitoring tool often used to monitor the stability of manufacturing processes. In quality control applications, measurement errors adversely affect the performance of control charts. In this paper, we study the effect of measurement error on the detection abilities of the exponentially weighted moving average (EWMA) control charts for monitoring process mean based on ranked set sampling (RSS), median RSS (MRSS), imperfect RSS (IRSS) and imperfect MRSS (IMRSS) schemes. We also study the effect of multiple measurements and non-constant error variance on the performances of the EWMA control charts. The EWMA control chart based on simple random sampling is compared with the EWMA control charts based on RSS, MRSS, IRSS and IMRSS schemes. The performances of the EWMA control charts are evaluated in terms of out-of-control average run length and standard deviation of run lengths. It turns out that the EWMA control charts based on MRSS and IMRSS schemes are bette...
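
As background, the sketch below simulates the run-length behaviour of a basic two-sided EWMA chart for the mean under simple random sampling; the RSS/MRSS schemes and the measurement-error model studied in the paper are not reproduced, and the chart constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, L, n = 0.2, 2.86, 5            # smoothing constant, limit width, subgroup size

def run_length(shift=0.0, max_t=50_000):
    """Number of subgroups until the EWMA statistic leaves its control limits."""
    z = 0.0                                               # EWMA of subgroup means
    for t in range(1, max_t + 1):
        xbar = rng.normal(loc=shift, scale=1.0 / np.sqrt(n))
        z = lam * xbar + (1.0 - lam) * z
        # exact (time-varying) standard deviation of the EWMA statistic
        sigma_z = np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)) / n)
        if abs(z) > L * sigma_z:
            return t
    return max_t

arl0 = np.mean([run_length(0.0) for _ in range(1000)])
arl1 = np.mean([run_length(0.5) for _ in range(1000)])
print(f"in-control ARL ~ {arl0:.0f}; ARL after a 0.5-sigma mean shift ~ {arl1:.1f}")
```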

56 citations


Journal ArticleDOI
TL;DR: In this article, a new approach is introduced to estimate, for any k, all the k-th order Sobol' indices using only two samples, whereas classical Monte Carlo estimators require O(d) samples in a d-dimensional space.
Abstract: In variance-based sensitivity analysis, the method of Sobol' (1993) makes it possible to compute Sobol' indices using Monte Carlo integration. One of the main drawbacks of this approach is that the estimation of Sobol' indices requires the use of several samples. For example, in a d-dimensional space, the estimation of all the first-order Sobol' indices requires d+1 samples. Some interesting combinatorial results have been introduced to weaken this defect, in particular by Saltelli (2002) and more recently by Owen (2012), but the quantities they estimate still require O(d) samples. In this paper, we introduce a new approach to estimate, for any k, all the k-th order Sobol' indices using only two samples. We establish theoretical properties of such a method for the first-order Sobol' indices and discuss the generalization to higher-order indices. As an illustration, we apply this new approach to a marine ecosystem model of the Ligurian Sea (northwestern Mediterranean) in order to study the relative importance of several of its parameters. The calibration process of this kind of chemical simulator is well known to be quite intricate, and a rigorous and robust sensitivity analysis (i.e. one valid without strong regularity assumptions), as the method of Sobol' provides, could be of great help.
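
For context, the sketch below implements the classical pick-freeze (Saltelli-type) estimator of first-order Sobol' indices, i.e. the multi-sample baseline the paper improves upon, on the standard Ishigami test function rather than the paper's marine ecosystem model.

```python
import numpy as np

# Classical pick-freeze estimation of first-order Sobol' indices on the
# Ishigami test function (an illustration, not the paper's two-sample method).
rng = np.random.default_rng(3)

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

d, n = 3, 200_000
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
yA, yB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                                  # resample only the i-th input
    S_i = np.mean(yB * (ishigami(ABi) - yA)) / var_y     # Saltelli (2010)-type estimator
    print(f"S_{i + 1} ~ {S_i:.3f}")
```

For this test function the true first-order indices are roughly 0.31, 0.44 and 0, which the estimates should approach as the sample size grows.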

52 citations


Journal ArticleDOI
Sajid Ali
TL;DR: In this article, the authors consider the weighted Lindley distribution, which belongs to the class of weighted distributions, and investigate various of its properties, including stochastic ordering, the Bonferroni and Lorenz curves, various entropies and order statistics.
Abstract: Weighted distributions provide a comprehensive understanding by adding flexibility to existing standard distributions. In this article, we consider the weighted Lindley distribution, which belongs to the class of weighted distributions, and investigate various of its properties. Although our main focus is the Bayesian analysis, the stochastic ordering, the Bonferroni and Lorenz curves, various entropies and order statistics are derived for the first time for this distribution. Different types of loss functions are considered; the Bayes estimators and their respective posterior risks are computed and compared. The different reliability characteristics, including the hazard function, stress-strength analysis and the mean residual life function, are also analysed. The Lindley approximation and importance sampling are described for estimation of the parameters. A simulation study is designed to inspect the effect of sample size on the estimated parameters. A real-life application is als...

Journal ArticleDOI
TL;DR: In this article, three methods, including a T2-based method, a likelihood ratio test (LRT) method and an F method, are developed and modified in order to be applied in monitoring GLM regression profiles in Phase I.
Abstract: In some industrial applications, the quality of a process or product is characterized by a relationship between the response variable and one or more independent variables, which is called a profile. There are many approaches for monitoring different types of profiles in the literature. Most researchers assume that the response variable follows a normal distribution. However, this assumption may be violated in many cases. The most likely situation is when the response variable follows a distribution from the generalized linear models (GLMs) family. For example, when the response variable is the number of defects in a certain area of a product, the observations follow a Poisson distribution, and ignoring this fact will cause misleading results. In this paper, three methods, including a T2-based method, a likelihood ratio test (LRT) method and an F method, are developed and modified in order to be applied in monitoring GLM regression profiles in Phase I. The performance of the proposed methods is analysed and compared for the s...
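
A minimal sketch of the T2-type idea, assuming Poisson profiles: fit a GLM to each Phase I profile and screen the estimated coefficient vectors with a Hotelling-type statistic. The simulated data, design matrix and the absence of a formal control limit are simplifications, not the paper's exact procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 10)
X = sm.add_constant(x)                      # design: intercept + one explanatory variable
beta_true = np.array([1.0, 0.8])            # illustrative in-control coefficients

betas = []
for _ in range(30):                         # 30 Phase I profiles
    y = rng.poisson(np.exp(X @ beta_true))
    betas.append(sm.GLM(y, X, family=sm.families.Poisson()).fit().params)
betas = np.asarray(betas)

# Hotelling-type T2 statistic for each profile's estimated coefficient vector
bbar = betas.mean(axis=0)
S_inv = np.linalg.inv(np.cov(betas, rowvar=False))
T2 = np.einsum('ij,jk,ik->i', betas - bbar, S_inv, betas - bbar)
print(np.round(T2, 2))                      # screen against a suitable Phase I limit
```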

Journal ArticleDOI
TL;DR: In this article, the authors considered point and interval estimation procedures in the presence of type-I progressively hybrid censored data and derived the expression of the expected number of failures in life testing experiment.
Abstract: The Maxwell (or Maxwell–Boltzmann) distribution was invented to solve problems relating to physics and chemistry. It has also proved its strength in analysing lifetime data. For this distribution, we consider point and interval estimation procedures in the presence of type-I progressively hybrid censored data. We obtain the maximum likelihood estimator of the parameter and provide asymptotic and bootstrap confidence intervals for it. The Bayes estimates and Bayesian credible and highest posterior density intervals are obtained using an inverted gamma prior. The expression for the expected number of failures in the life testing experiment is also derived. The results are illustrated through a simulation study, and the analysis of a real data set is presented.

Journal ArticleDOI
TL;DR: In this paper, an adaptive method of residual life (RL) estimation for some high reliable products based on degradation data has been developed based on two-dimensional degradation data, where a product has two performance characteristics (PCs) and the degradation of each PC over time is governed by a non-stationary gamma degradation process.
Abstract: Due to the growing importance in maintenance scheduling, the issue of residual life (RL) estimation for some high reliable products based on degradation data has been studied quite extensively. However, most of the existing work only deals with one-dimensional degradation data, which may not be realistic in some cases. Here, an adaptive method of RL estimation is developed based on two-dimensional degradation data. It is assumed that a product has two performance characteristics (PCs) and that the degradation of each PC over time is governed by a non-stationary gamma degradation process. From a practical consideration, it is further assumed that these two PCs are dependent and that their dependency can be characterized by a copula function. As the likelihood function in such a situation is complicated and computationally quite intensive, a two-stage method is used to estimate the unknown parameters of the model. Once new degradation information of the product being monitored becomes available, random effe...

Journal ArticleDOI
TL;DR: In this paper, a variable repetitive group sampling plan based on one-sided process capability indices is proposed to deal with lot sentencing for onesided specifications and the parameters of the proposed plans are tabulated for some combinations of acceptance quality levels with commonly used producer's risk and consumer's risk.
Abstract: In this paper, a variable repetitive group sampling plan based on one-sided process capability indices is proposed to deal with lot sentencing for one-sided specifications. The parameters of the proposed plans are tabulated for some combinations of acceptance quality levels with commonly used producer's risk and consumer's risk. The efficiency of the proposed plan is compared with the Pearn and Wu [Critical acceptance values and sample sizes of a variables sampling plan for very low fraction of defectives. Omega – Int J Manag Sci. 2006;34(1):90–101] plan in terms of sample size and the power curve. One example is given to illustrate the proposed methodology.

Journal ArticleDOI
TL;DR: In this article, the authors compared the performance of the Shapiro-Francia (SF) normality test to other normality tests by studying the distribution of their p-values, and found that the SF test was the best test statistic in detecting deviation from normality among the nine tests considered at all.
Abstract: The Shapiro–Francia (SF) normality test is an important test in statistical modelling. However, little has been done by researchers to compare the performance of this test to other normality tests. This paper therefore measures the performance of the SF and other normality tests by studying the distribution of their p-values. For the purpose of this study, we selected eight well-known normality tests to compare with the SF test: (i) Kolmogorov–Smirnov (KS), (ii) Anderson–Darling (AD), (iii) Cramér–von Mises (CM), (iv) Lilliefors (LF), (v) Shapiro–Wilk (SW), (vi) Pearson chi-square (PC), (vii) Jarque–Bera (JB) and (viii) D'Agostino (DA). The distribution of p-values of these normality tests was obtained by generating data from the normal distribution and a well-known symmetric non-normal distribution at various sample sizes (small, medium and large). Our simulation results showed that the SF normality test was the best test statistic in detecting deviation from normality among the nine tests considered at all ...
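
A small simulation in this spirit can be run with scipy, as sketched below; only a subset of the tests with readily available p-values is used, and the sample size, replication count and t(5) alternative are illustrative choices, not the paper's full design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

tests = {
    "Shapiro-Wilk": lambda x: stats.shapiro(x).pvalue,
    "Jarque-Bera":  lambda x: stats.jarque_bera(x).pvalue,
    "D'Agostino":   lambda x: stats.normaltest(x).pvalue,
}

def rejection_rate(sampler, n=50, reps=2000, alpha=0.05):
    """Proportion of replications with p-value below alpha, per test."""
    return {name: np.mean([test(sampler(n)) < alpha for _ in range(reps)])
            for name, test in tests.items()}

print("normal data:", rejection_rate(lambda n: rng.normal(size=n)))
print("t(5) data  :", rejection_rate(lambda n: rng.standard_t(5, size=n)))
```

Under normal data the rejection rates should sit near the nominal 5% level, while under the heavy-tailed alternative they estimate each test's power.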

Journal ArticleDOI
TL;DR: In this article, the prediction of a future observation based on a type-I hybrid censored sample when the lifetime distribution of experimental units is assumed to be a Weibull random variable is considered.
Abstract: In this paper, we consider the prediction of a future observation based on a type-I hybrid censored sample when the lifetime distribution of experimental units is assumed to be a Weibull random variable. Different classical and Bayesian point predictors are obtained. Bayesian predictors are obtained using squared error and linear-exponential loss functions. We also provide a simulation consistent method for computing Bayesian prediction intervals. Monte Carlo simulations are performed to compare the performances of the different methods, and one data analysis has been presented for illustrative purposes.

Journal ArticleDOI
TL;DR: This note points out that the double gamma difference distribution recently introduced by Augustyniak and Doray is well known in financial econometrics: it is the symmetric variance gamma family of distributions.
Abstract: It is the aim of this note to point out that the double gamma difference distribution recently introduced by [Augustyniak M, and Doray, LG. Inference for a leptokurtic symmetric family of distributions represented by the difference of two gamma variables. J Statist Comput Simul. 2012;82:1621–1634] is well known in financial econometrics: it is the symmetric variance gamma family of distributions. We trace back to the various origins of this distribution. In addition, we consider in some detail the difference of two independent gamma distributed random variables with different shape parameters.
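
The symmetry and heavy tails of such a difference are easy to check by simulation, as in the sketch below; the shape and scale values are illustrative.

```python
import numpy as np
from scipy import stats

# The difference of two independent Gamma variables with equal shape parameters
# is symmetric and leptokurtic, which is the variance gamma shape the note refers to.
rng = np.random.default_rng(6)
shape, scale, n = 1.5, 1.0, 200_000
d = rng.gamma(shape, scale, n) - rng.gamma(shape, scale, n)

print("skewness:", round(float(stats.skew(d)), 3))             # ~ 0 by symmetry
print("excess kurtosis:", round(float(stats.kurtosis(d)), 3))  # > 0: heavier tails than normal
```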

Journal ArticleDOI
TL;DR: In this paper, a new control chart is proposed by using an auxiliary variable and repetitive sampling in order to enhance the performance of detecting a shift in process mean, which is based on the outer and inner control limits so that repetitive sampling is allowed when the plotted statistic falls between the two limits.
Abstract: In this paper, a new control chart is proposed by using an auxiliary variable and repetitive sampling in order to enhance the performance of detecting a shift in the process mean. The product-difference type estimator of the mean is plotted on the proposed control chart, which utilizes the information of an auxiliary variable correlated with the main quality variable. The proposed control chart is based on outer and inner control limits so that repetitive sampling is allowed when the plotted statistic falls between the two limits. The average run length (ARL) of the proposed control chart is evaluated using Monte Carlo simulation. The proposed control chart is compared with the control chart proposed by Riaz, and the results show that the proposed chart outperforms it in terms of the ARL.

Journal ArticleDOI
TL;DR: In this paper, the estimation of the reliability R = P(X < Y) ...
Abstract: The aim of this paper is to study the estimation of the reliability R = P(X < Y) ...

Journal ArticleDOI
TL;DR: A regression tree-based gradient boosting estimator for nonparametric multiple expectile regression is derived, referred to as ER-Boost, which is applied to analyse North Carolina County crime data and provides a good demonstration of some nice features of ER-Boost, such as its ability to handle different types of covariates and its model interpretation tools.
Abstract: Expectile regression [Newey W, Powell J. Asymmetric least squares estimation and testing, Econometrica. 1987;55:819–847] is a nice tool for estimating the conditional expectiles of a response variable given a set of covariates. Expectile regression at 50% level is the classical conditional mean regression. In many real applications having multiple expectiles at different levels provides a more complete picture of the conditional distribution of the response variable. Multiple linear expectile regression model has been well studied [Newey W, Powell J. Asymmetric least squares estimation and testing, Econometrica. 1987;55:819–847; Efron B. Regression percentiles using asymmetric squared error loss, Stat Sin. 1991;1:93–125.], but it can be too restrictive for many real applications. In this paper, we derive a regression tree-based gradient boosting estimator for nonparametric multiple expectile regression. The new estimator, referred to as ER-Boost, is implemented in an R package erboost publicly available ...
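
To make the construction concrete, the sketch below implements a bare-bones gradient-boosting loop for the expectile (asymmetric squared error) loss using scikit-learn regression trees; it is a simplified stand-in for the idea behind ER-Boost, not a reimplementation of the erboost package, and the simulated data and tuning values are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Gradient boosting for the expectile loss L_tau(y, f) = |tau - 1{y < f}| * (y - f)^2.
def expectile_boost(X, y, tau=0.75, n_trees=200, lr=0.05, depth=3):
    f = np.full(len(y), y.mean())               # initial constant fit
    trees = []
    for _ in range(n_trees):
        w = np.where(y < f, 1.0 - tau, tau)     # asymmetric weights
        grad = 2.0 * w * (y - f)                # negative gradient of the loss
        tree = DecisionTreeRegressor(max_depth=depth).fit(X, grad)
        trees.append(tree)
        f = f + lr * tree.predict(X)
    return y.mean(), trees, lr

def expectile_predict(model, X):
    f0, trees, lr = model
    return f0 + lr * np.sum([t.predict(X) for t in trees], axis=0)

# Illustrative heteroscedastic data (not the crime data analysed in the paper)
rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.sin(X[:, 0]) + (0.5 + 0.5 * np.abs(X[:, 1])) * rng.normal(size=500)
print(expectile_predict(expectile_boost(X, y), X[:5]))
```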

Journal ArticleDOI
TL;DR: In this paper, a data-dependent method for choosing the tuning parameter appearing in many recently developed goodness-of-fit test statistics is proposed, which is applicable to a class of distributions for which the null distribution of the test statistic is independent of unknown parameters.
Abstract: We propose a data-dependent method for choosing the tuning parameter appearing in many recently developed goodness-of-fit test statistics. The new method, based on the bootstrap, is applicable to a class of distributions for which the null distribution of the test statistic is independent of unknown parameters. No data-dependent choice for this parameter exists in the literature; typically, a fixed value for the parameter is chosen which can perform well for some alternatives, but poorly for others. The performance of the new method is investigated by means of a Monte Carlo study, employing three tests for exponentiality. It is found that the Monte Carlo power of these tests, using the data-dependent choice, compares favourably to the maximum achievable power for the tests calculated over a grid of values of the tuning parameter.

Journal ArticleDOI
TL;DR: In this paper, a new estimator of Kullback-Leibler loss in Gaussian Graphical models is presented, which provides a computationally fast alternative to cross-validation.
Abstract: We study the problem of selecting a regularization parameter in penalized Gaussian graphical models. When the goal is to obtain a model with good predictive power, cross-validation is the gold standard. We present a new estimator of Kullback–Leibler loss in Gaussian Graphical models which provides a computationally fast alternative to cross-validation. The estimator is obtained by approximating leave-one-out-cross-validation. Our approach is demonstrated on simulated data sets for various types of graphs. The proposed formula exhibits superior performance, especially in the typical small sample size scenario, compared to other available alternatives to cross-validation, such as Akaike's information criterion and Generalized approximate cross-validation. We also show that the estimator can be used to improve the performance of the Bayesian information criterion when the sample size is small.
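
For orientation, the sketch below shows the held-out Gaussian log-likelihood criterion (equivalent up to constants to a Kullback–Leibler loss) that such an estimator is meant to approximate cheaply, applied to scikit-learn's graphical lasso over a small penalty grid; the data and grid are illustrative, and the paper's closed-form leave-one-out approximation is not reproduced.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(8)
p = 10
A = rng.normal(size=(p, p))
Sigma = A @ A.T / p + np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=60)
X_train, X_val = X[:40], X[40:]

def heldout_loss(alpha):
    """Negative Gaussian log-likelihood of the validation sample, up to constants."""
    model = GraphicalLasso(alpha=alpha, max_iter=500).fit(X_train)
    Theta = model.precision_
    S_val = np.cov(X_val, rowvar=False, bias=True)
    _, logdet = np.linalg.slogdet(Theta)
    return np.trace(S_val @ Theta) - logdet

losses = {a: heldout_loss(a) for a in (0.01, 0.05, 0.1, 0.2, 0.4)}
print("selected penalty:", min(losses, key=losses.get))
```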

Journal ArticleDOI
TL;DR: The clustering results prove that the proposed algorithm is able to automatically group the pdfs and provide the optimal cluster number without any a priori information, and the performance study shows that the algorithm is more efficient than existing ones.
Abstract: We propose an intuitive and computationally simple algorithm for clustering probability density functions (pdfs). A data-driven learning mechanism is incorporated in the algorithm in order to determine the suitable widths of the clusters. The clustering results prove that the proposed algorithm is able to automatically group the pdfs and provide the optimal cluster number without any a priori information. The performance study also shows that the proposed algorithm is more efficient than existing ones. In addition, the clustering can serve as an intermediate compression tool in content-based multimedia retrieval; we apply the proposed algorithm to categorize a subset of the COREL image database, and the clustering results indicate that the proposed algorithm performs well in colour image categorization.

Journal ArticleDOI
TL;DR: In this paper, the problem of making statistical inference on unknown parameters of a lognormal distribution under the assumption that samples are progressively censored is considered, and the maximum likelihood estimates (MLEs) are obtained by using the expectation-maximization algorithm.
Abstract: We consider the problem of making statistical inference on unknown parameters of a lognormal distribution under the assumption that samples are progressively censored. The maximum likelihood estimates (MLEs) are obtained by using the expectation-maximization algorithm. The observed and expected Fisher information matrices are provided as well. Approximate MLEs of unknown parameters are also obtained. Bayes and generalized estimates are derived under squared error loss function. We compute these estimates using Lindley's method as well as importance sampling method. Highest posterior density interval and asymptotic interval estimates are constructed for unknown parameters. A simulation study is conducted to compare proposed estimates. Further, a data set is analysed for illustrative purposes. Finally, optimal progressive censoring plans are discussed under different optimality criteria and results are presented.

Journal ArticleDOI
TL;DR: This work proposes to use modern algorithm configuration techniques, e.g. iterated F-racing, to efficiently move through the model hypothesis space and to simultaneously configure algorithm classes and their respective hyperparameters.
Abstract: Many different models for the analysis of high-dimensional survival data have been developed over the past years. While some of the models and implementations come with an internal parameter tuning automatism, others require the user to accurately adjust defaults, which often feels like a guessing game. Exhaustively trying out all model and parameter combinations will quickly become tedious or infeasible in computationally intensive settings, even if parallelization is employed. Therefore, we propose to use modern algorithm configuration techniques, e.g. iterated F-racing, to efficiently move through the model hypothesis space and to simultaneously configure algorithm classes and their respective hyperparameters. In our application we study four lung cancer microarray data sets. For these we configure a predictor based on five survival analysis algorithms in combination with eight feature selection filters. We parallelize the optimization and all comparison experiments with the BatchJobs and BatchExperime...

Journal ArticleDOI
TL;DR: In this article, an exponentially weighted moving average (EWMA) control chart for the shape parameter β of Weibull processes is proposed, which is based on a moving range when a single measurement is taken per sampling period.
Abstract: In this article, we propose an exponentially weighted moving average (EWMA) control chart for the shape parameter β of Weibull processes. The chart is based on a moving range when a single measurement is taken per sampling period. We consider both one-sided (lower-sided and upper-sided) and two-sided control charts. We perform simulations to estimate control limits that achieve a specified average run length (ARL) when the process is in control. The control limits we derive are ARL unbiased in that they result in ARL that is shorter than the stable-process ARL when β has shifted. We also perform simulations to determine Phase I sample size requirements if control limits are based on an estimate of β. We compare the ARL performance of the proposed chart to that of the moving range chart proposed in the literature.

Journal ArticleDOI
TL;DR: In this article, the problem of assessing prediction for count time series based on either the Poisson distribution or the negative binomial distribution was considered, and different criteria were employed to study the prediction problem.
Abstract: We consider the problem of assessing prediction for count time series based on either the Poisson distribution or the negative binomial distribution. By a suitable parametrization we employ both distributions with the same mean. We regress the mean on its past values and the values of the response and after obtaining consistent estimators of the regression parameters, regardless of the response distribution, we employ different criteria to study the prediction problem. We show by simulation and data examples that scoring rules and diagnostic graphs that have been proposed for independent but not identically distributed data can be adapted in the setting of count dependent data.
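
As an illustration of the scoring-rule comparison, the sketch below computes the mean logarithmic score of Poisson and negative binomial predictive distributions on simulated counts; the fixed mean and dispersion are assumptions, and the paper's regression structure for the conditional mean is not reproduced.

```python
import numpy as np
from scipy import stats

# Mean logarithmic score (smaller is better) of two predictive distributions
# with the same mean, evaluated on simulated overdispersed counts.
rng = np.random.default_rng(9)
mu, size = 5.0, 2.0                                    # common mean; NB dispersion
p_nb = size / (size + mu)
y = rng.negative_binomial(size, p_nb, 2000)            # "observed" counts

log_score_pois = -stats.poisson.logpmf(y, mu).mean()
log_score_nb = -stats.nbinom.logpmf(y, size, p_nb).mean()
print(f"mean log score: Poisson {log_score_pois:.3f} vs negative binomial {log_score_nb:.3f}")
```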

Journal ArticleDOI
TL;DR: In this paper, a multi-dimensional degradation data was taken into account for the target product with multivariate performance characteristics (PCs) to estimate the residual life (RL) of the product, where the degradation of PC over time is governed by a multivariate Wiener process with nonlinear drifts.
Abstract: For some operable products with critical reliability constraints, it is important to estimate accurately their residual lives so that maintenance actions can be arranged suitably and efficiently. In the literature, most publications have dealt with this issue by only considering one-dimensional degradation data. However, this may be not reasonable in situations wherein a product may have two or more performance characteristics (PCs). In such situations, multi-dimensional degradation data should be taken into account. Here, for the target product with multivariate PCs, methods of residual life (RL) estimation are developed. This is done with the assumption that the degradation of PCs over time is governed by a multivariate Wiener process with nonlinear drifts. Both the population-based degradation information and the degradation history of the target product up-to-date are combined to estimate the RL of the product. Specifically, the population-based degradation information is first used to obtain the esti...

Journal ArticleDOI
TL;DR: In this article, a new generalized Lindley distribution, based on weighted mixture of two gamma distributions, is proposed, which includes the Lindley, gamma and exponential distributions as and other forms of Lindley distributions as special cases.
Abstract: A new generalized Lindley distribution, based on weighted mixture of two gamma distributions, is proposed. This model includes the Lindley, gamma and exponential distributions as and other forms of Lindley distributions as special cases. Lindley distribution based on two gamma with two consecutive shape parameter is investigated in some details. Statistical and reliability properties of this model are derived. The size-biased, the length-biased and Lorenze curve are established. Estimation of the underlying parameters via the moment method and maximum likelihood has been investigated and their values are simulated. Finally, fitting this model to a set of real-life data is discussed.